Intro
Medical devices that contain AI-driven functions have been a focus of regulatory agencies in both the EU and the US for the past two years, with the FDA taking the lead in releasing regulations and guidance on the matter.
On January 6, 2025, the FDA released a new Draft Guidance “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations”, ref. AI-Enabled Device Software Functions. This document is intended to guide a company in providing adequate evidence in marketing submissions for AI-enabled devices, in a format that the FDA will find easier to review.
The guidance itself does not prescribe a significant amount of additional activities or evidence for these devices, but it provides useful clarifications and details on the different content of a submission.
What’s new?
The 67-page document consists of 13 sections plus 6 Appendices. Like many other FDA guidance documents, the Appendices provide useful details and examples on the practical application of the main document body.
There’s not much “NEW” content, but plenty of details and explanations.
TPLC
The guidance reinforces the importance of a Total Product Life Cycle (TPLC) approach to the management of AI-enabled medical devices. The most important aspect of the TPLC approach in the FDA’s guidance is the comprehensive management of risks throughout the entire lifecycle of the device, from design and development to real-world use and eventual decommissioning.
This approach emphasizes:
- Continuous monitoring and improvement: the TPLC approach allows for ongoing algorithm changes and improvements while ensuring safety and effectiveness.
- Risk-based oversight: the FDA applies a risk-based approach to determine the specific testing and applicable recommendations for AI-enabled devices.
- Quality systems and good machine learning practices: the guidance establishes clear expectations on quality systems and good ML practices throughout the device lifecycle.
- Transparency and bias mitigation: the TPLC approach incorporates strategies to address transparency and control bias from the earliest stages of device design through decommissioning.
- Premarket review and post market monitoring: the approach combines premarket review for devices requiring submission with continued monitoring and evaluation of performance in real-world settings.
For more information on the FDA’s TPLC, see Total Product Life Cycle for Medical Devices.
QMS content
Aspects related to AI should be integrated throughout the existing QMS processes. In particular:
- Cybersecurity;
- Risk Management;
- UI / User Information / Labelling;
- Usability / Human Factors Engineering;
- Data Management;
- Device Validation;
- Post-Market Surveillance and updates.
All of these items are discussed in the guidance and are already part of any QMS, so additional AI-related considerations can naturally fit into it.
Data Management is probably the only aspect that is much more important for AI-enabled devices than for standard SaMD or for software embedded in medical devices.
Device Description
The guidance describes the type of information the FDA expects in a marketing submission for the description of the medical device. Reasonably, the reviewers want a detailed overview of the AI model, to be able to quickly understand the context and speed up the review. The expected information includes:
- How the AI is used to achieve the device’s intended use, in particular whether it automates existing manual processes;
- Inputs and outputs (I/O);
- The level of automation and of manual intervention by the user;
- Special considerations about the user characteristics required to interface with the AI;
- How the user can affect or interpret (or misinterpret) the results.
UI and Labeling
Rather than facing an AI-driven black box, the user should be made aware of how the AI processes inputs and provides outputs. This gives the user the knowledge to interpret the results and, if appropriate, supplement them with their own expertise. Users should be aware of the AI’s limitations and potential errors, performance metrics, validation dataset, model architecture and more.
Being transparent to the user will significantly increase their confidence in the AI and its outputs.
Risk Assessment
The guidance reinforces the point discussed in the UI/Labeling section, i.e. the importance of risks related to understanding the information necessary to use or interpret the device, including risks arising from missing or unclear information. This fits naturally into the Usability Engineering process, in particular the Use-Related Risk Analysis (URRA).
Naturally, post-market activities related to risk management must include AI aspects too.
Data Management
This is a key aspect of AI-enabled devices. Training and validation datasets are central to how the AI processes information. The FDA guidance asks the manufacturer to provide information about:
- How the data was collected;
- Reference standard (the “true” result);
- Data Annotation process (if any);
- Data storage;
- Management and Independence of Data;
- Representativeness of Data;
Manufacturers will have to produce a substantial amount of information to ensure the FDA understands and agrees with the quality of the data used.
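To illustrate what “Representativeness of Data” can mean in practice, here is a minimal Python sketch that compares subgroup proportions in a dataset against the expected shares in the intended patient population. The function name, the subgroup labels and the 5% tolerance are illustrative assumptions, not anything the guidance prescribes.

```python
from collections import Counter

def representativeness_report(dataset_attrs, target_shares, tolerance=0.05):
    """Compare the observed share of each subgroup in a dataset against
    its expected share in the intended patient population.

    dataset_attrs: one subgroup label per sample (e.g. sex, site, age band).
    target_shares: dict mapping subgroup label -> expected proportion.
    tolerance: maximum acceptable absolute deviation per subgroup (assumed).
    """
    counts = Counter(dataset_attrs)
    total = len(dataset_attrs)
    report = {}
    for group, expected in target_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "within_tolerance": abs(observed - expected) <= tolerance,
        }
    return report

# Hypothetical example: a training set that over-represents one subgroup.
labels = ["F"] * 70 + ["M"] * 30
print(representativeness_report(labels, {"F": 0.5, "M": 0.5}))
```

A report like this, one per demographic attribute, is one way to make the representativeness discussion in a submission concrete and reviewable.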
Model Description and Development
This can be considered an extension of the Device Description, with the same purpose of supporting the FDA’s ability to assess the safety and effectiveness of an AI-enabled device and determine the device’s performance testing specifications. These are technical, AI-specific details.
- Model Description: an explanation of each model used as part of the AI-enabled device, including (as applicable) a description of the technical elements of the model that allow for and control customization, any quality control criteria or algorithms and a description of any methods applied to the input and/or output data.
- Model Development: a description of how the model was trained including (as applicable) a description of the metrics and results obtained, an explanation of any pre-trained models that were used, a description of the use of ensemble methods, an explanation of how any thresholds were determined and an explanation of any calibration of the model output.
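As an illustration of “how any thresholds were determined”, the sketch below picks an operating threshold on a held-out validation set by maximizing Youden’s J statistic (sensitivity + specificity − 1). This is one common technique, not a method mandated by the guidance; the data and candidate thresholds are invented for the example.

```python
def choose_threshold(scores, labels, candidates):
    """Select the decision threshold that maximizes Youden's J
    (sensitivity + specificity - 1) on a held-out validation set.

    scores: model output scores; labels: 0/1 ground truth (both classes
    must be present); candidates: thresholds to evaluate.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in candidates:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1  # sensitivity + specificity - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical validation scores and labels.
t, j = choose_threshold([0.1, 0.4, 0.35, 0.8, 0.7, 0.2],
                        [0, 0, 1, 1, 1, 0],
                        [0.3, 0.5])
```

Documenting the dataset, the candidate thresholds and the selection criterion in this way gives the reviewer a reproducible account of the threshold choice.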
Validation
AI-enabled devices should follow the usual Design Validation process, and Clinical Studies may be required to prove their safety and efficacy. The guidance focuses on two aspects:
- Human Factors Validation
- Performance Validation
For the latter, the guidance provides extensive input on what should be included in the Study Protocols and Study Results.
Device Performance Monitoring
This is an extension of the general PMS activities, with a focus on changes that affect the inputs to the model or the relevance of the model, such as changes in patient demographics or disease prevalence, shifts in input data, input data integrity and changes in user behavior or user demographics.
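One widely used way to quantify shifts in input data is the Population Stability Index (PSI), which compares the binned distribution of an input feature at training time against its distribution in production. The sketch below is a generic illustration; the 0.2 alert level is a common industry convention, not an FDA requirement.

```python
import math

def population_stability_index(expected, observed):
    """PSI between a baseline (training-time) distribution and the
    production distribution of one input feature.

    expected, observed: lists of per-bin proportions, each summing to 1.
    A PSI above ~0.2 is conventionally treated as significant drift
    worth investigating (an industry rule of thumb, not a regulation).
    """
    psi = 0.0
    for e, o in zip(expected, observed):
        e = max(e, 1e-6)  # avoid log(0) on empty bins
        o = max(o, 1e-6)
        psi += (o - e) * math.log(o / e)
    return psi

# Hypothetical drift: a feature that was uniform at training time
# now skews toward the upper bins in production.
psi = population_stability_index([0.25, 0.25, 0.25, 0.25],
                                 [0.10, 0.20, 0.30, 0.40])
```

Running such a check per feature on each monitoring cycle is one concrete way to turn “shifts in input data” into an actionable post-market signal.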
Cybersecurity
The data used to train and verify the model, and the model itself, are now critical assets that the company must protect. Threats can include:
- Data Poisoning;
- Model Inversion/Stealing;
- Model Evasion;
- Data Leakage;
- Overfitting;
- Model Bias;
- Performance Drift.
Note that these may come from both adversarial and non-adversarial events.
The guidance also lists a number of controls that the company should consider.
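As one example of a control protecting the training data as a critical asset, a manufacturer could keep a hash manifest of the dataset and verify it before each training run or release, so that file-level tampering (a simple form of data poisoning) is detected. This is an illustrative sketch of one possible control, not a control listed in the guidance; file names and contents are invented.

```python
import hashlib

def snapshot_hashes(files):
    """Build a manifest of SHA-256 digests for a dataset.

    files: dict mapping file name -> raw bytes. The manifest would be
    stored separately (and itself integrity-protected) alongside the data.
    """
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in files.items()}

def tampered(files, manifest):
    """Return the names of manifest entries whose current hash no longer
    matches, i.e. files that were modified or removed since the snapshot."""
    current = snapshot_hashes(files)
    return sorted(name for name in manifest
                  if current.get(name) != manifest[name])

# Hypothetical usage: snapshot at dataset freeze, verify before training.
dataset = {"train.csv": b"a,b\n1,2\n"}
manifest = snapshot_hashes(dataset)
```

A check like `tampered(dataset, manifest)` returning a non-empty list would block the training pipeline and trigger an investigation.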
Public Submission Summary
This section of the guidance maps the information above to the sections of the submission where it should reside or be referenced.
Conclusion & Call to Action
This FDA draft guidance, released on January 6, 2025, is indeed the first comprehensive guidance for AI-enabled devices throughout the total product lifecycle. It complements the recently issued final guidance on predetermined change control plans and aligns with the FDA’s AI/ML Action Plan and Good Machine Learning Practices (GMLP). The FDA is currently seeking public comments on this draft guidance until April 7, 2025, and will hold a webinar on February 18, 2025, to discuss it further.
Given the importance and timeliness of this topic, SoftComply’s planned Live Online Forum on January 30, 2025, provides an excellent opportunity for industry professionals to discuss the implications of this draft guidance with experts and peers. Register here to join the discussion and ask any questions you may have.