The FDA’s guidance highlights several critical components:
1. A Risk-Based Credibility Framework
AI models must be assessed according to their intended use, or context of use (COU), with the level of oversight and stringency scaled to the potential impact on patient safety and on the reliability of study results.
Key factors include:
- Performance criteria
- Risk mitigation strategies
- Tailored documentation requirements
2. Scope of AI Applications
The guidance focuses on AI’s role in the nonclinical, clinical, postmarketing, and manufacturing phases, specifically for regulatory decisions related to drug safety, effectiveness, and quality.
Excluded from the guidance are:
- AI in drug discovery
- Applications enhancing operational efficiency without direct patient or study impacts
3. Early Engagement with FDA
The FDA strongly encourages sponsors to engage early, especially for higher-risk applications such as postmarketing pharmacovigilance. Clear documentation and alignment with the agency’s expectations are key to a smooth evaluation process.
4. Encouraging Innovation in Pharmacovigilance
To foster innovation, the FDA has also launched the Emerging Drug Safety Technology Meeting (EDSTM) program, a platform for discussing new technologies and their potential applications in pharmacovigilance.
The FDA is calling on industry leaders, researchers, and other stakeholders to provide feedback on the draft guidance by April 7, 2025.