AI-enabled Medical Devices – FDA Guidance

Matteo Gubellini
Regulatory Affairs Manager
January 20, 2025

Intro

Medical Devices that contain AI-driven functions have been the focus of Regulatory Agencies in both the EU and the US for the past 2 years, with the FDA taking the lead in releasing regulations and guidance on the matter.

On January 6, 2025, the FDA released a new Draft Guidance “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations”, ref. AI-Enabled Device Software Functions. This document is intended to guide a company in providing adequate evidence in marketing submissions for AI-enabled devices, in a format that the FDA will find easier to review.

The guidance itself does not prescribe a significant amount of additional activities or evidence for these devices, but it provides useful clarifications and details on the different content of a submission.

What’s new?

The 67-page document consists of 13 sections plus 6 Appendices. Like many other FDA guidance documents, the Appendices provide useful details and examples on the practical application of the main document body.

There’s not much “NEW” content, but plenty of details and explanations.

TPLC

The guidance reinforces the importance of a Total Product Life Cycle (TPLC) approach to the management of AI-enabled medical devices. Its most important aspect is the comprehensive management of risks throughout the entire lifecycle of the device, from design and development to real-world use and eventual decommissioning.

This approach emphasizes:

  1. Continuous monitoring and improvement: the TPLC approach allows for ongoing algorithm changes and improvements while ensuring safety and effectiveness.
  2. Risk-based oversight: FDA applies a risk-based approach to determine specific testing and applicable recommendations for AI-enabled devices.
  3. Quality systems and good machine learning practices: the guidance establishes clear expectations on quality systems and good ML practices throughout the device lifecycle.
  4. Transparency and bias mitigation: the TPLC approach incorporates strategies to address transparency and control bias from the earliest stages of device design through decommissioning.
  5. Premarket review and post market monitoring: the approach combines premarket review for devices requiring submission with continued monitoring and evaluation of performance in real-world settings.

For more information on the FDA’s TPLC, see Total Product Life Cycle for Medical Devices.

QMS content

Aspects related to AI should be disseminated throughout the existing QS processes. In particular:

  • Cybersecurity;

  • Risk Management;

  • UI / User Information / Labelling;

  • Usability / Human Factor Engineering;

  • Data Management;

  • Device Validation;

  • Post-Market Surveillance and updates;

are all items discussed in the guidance and already part of any QMS, so additional AI-related considerations can fit naturally into them.

Data Management is probably the only aspect that is much more important for AI-enabled devices than standard SaMD or software embedded in Medical Devices.

Device Description

The guidance describes the type of information the FDA expects in a marketing submission for the description of the medical device. Reasonably, the reviewers want a detailed overview of the AI model, to be able to quickly understand the context and speed up the review.

  • How the AI is used to achieve the device’s intended use, in particular if this automates existing manual processes;

  • Inputs and outputs (I/O);

  • Level of automation and manual intervention of the user;

  • Special considerations about user characteristics required to interface with the AI;

  • How the user can affect or interpret (or misinterpret) the results.

UI and Labeling

Rather than facing an AI-driven black box, the user should be made aware of how the AI processes inputs and provides outputs. This gives the user the knowledge to interpret the results and, if appropriate, supplement them with their expertise. Users should be aware of the AI’s limitations and potential errors, performance metrics, validation dataset, model architecture and more.

Being transparent to the user will significantly increase their confidence in the AI and its outputs.

Risk Assessment

The guidance reinforces the issue discussed in the UI/Labeling section, i.e. the importance of risks related to understanding the information necessary to use or interpret the device, including risks arising from missing or unclear information. This fits naturally into the Usability Engineering process, in particular the Use-Related Risk Analysis (URRA).

Naturally, post-market activities related to risk management must include AI aspects too.

Data Management

This is a key aspect of AI-enabled devices. Training and validation datasets are a key part of how the AI processes information. The FDA guidance asks the manufacturer to provide information about:

  1. How the data was collected;
  2. Reference standard (the “true” result);
  3. Data Annotation process (if any);
  4. Data storage;
  5. Management and Independence of Data;
  6. Representativeness of Data.

Manufacturers will have to produce a substantial amount of information to ensure the FDA understands and agrees with the quality of the data used.
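As a practical illustration of the “Representativeness of Data” item, a manufacturer might compare the composition of the training set against the intended patient population. The sketch below is a minimal, hypothetical example: the subgroup names, the data and the 10% tolerance are illustrative assumptions, not FDA requirements.

```python
# Hypothetical sketch: flag subgroups whose share in a training dataset
# deviates from the intended population share by more than a tolerance.
from collections import Counter

def representativeness_gaps(samples, target_shares, tolerance=0.10):
    """Return {subgroup: (observed_share, expected_share)} for every
    subgroup whose observed share deviates by more than `tolerance`."""
    counts = Counter(samples)
    total = sum(counts.values())
    gaps = {}
    for group, expected in target_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Illustrative data: sex distribution in a training set vs. the target population
training_sex = ["F"] * 700 + ["M"] * 300
gaps = representativeness_gaps(training_sex, {"F": 0.5, "M": 0.5})
# Both subgroups deviate by 20 percentage points, so both are flagged.
```

In a real submission this kind of analysis would cover all relevant attributes (age, sex, ethnicity, acquisition device, clinical site, etc.) and would be documented alongside the data collection description.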

Model Description and Development

This can be considered an extension of the Device Description, with the same purpose of supporting the FDA’s ability to assess the safety and effectiveness of an AI-enabled device and determine the device’s performance testing specifications. These are technical, AI-specific details.

  1. Model Description: an explanation of each model used as part of the AI-enabled device, including (as applicable) a description of the technical elements of the model that allow for and control customization, any quality control criteria or algorithms and a description of any methods applied to the input and/or output data.
  2. Model Development: a description of how the model was trained including (as applicable) a description of the metrics and results obtained, an explanation of any pre-trained models that were used, a description of the use of ensemble methods, an explanation of how any thresholds were determined and an explanation of any calibration of the model output.
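One of the Model Development items is “an explanation of how any thresholds were determined”. A common way to document this is to show the rule used to select the operating point; the sketch below picks the threshold that maximises Youden’s J (sensitivity + specificity − 1). This is a generic illustration with made-up scores, not the method the guidance mandates.

```python
# Hypothetical sketch: selecting a classification threshold by maximising
# Youden's J statistic over the observed model scores.
def youden_threshold(scores, labels):
    """Return (threshold, J) maximising sensitivity + specificity - 1.
    `labels` are 1 (positive) / 0 (negative); predictions use score >= t."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Illustrative validation scores and ground-truth labels
scores = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1,   1]
t, j = youden_threshold(scores, labels)
```

Whatever rule is actually used (Youden’s J, a fixed specificity target, a clinically motivated operating point), the point of the guidance item is that the rule and the data behind it are stated explicitly.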

Validation

AI-enabled devices should follow the usual process of Design Validation, and Clinical Studies may be required to prove their Safety and Efficacy. The guidance focuses on two aspects:

  1. Human Factors Validation
  2. Performance Validation

For the latter, the guidance provides extensive input on what should be included in the Study Protocols and Study Results.
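Performance study results are typically reported as point estimates with confidence intervals. As a minimal, hypothetical illustration, the sketch below computes a 95% Wilson score interval for sensitivity; the counts are made up and the choice of interval method is an assumption, not a requirement of the guidance.

```python
# Hypothetical sketch: reporting sensitivity with a 95% Wilson score interval.
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

tp, fn = 88, 12  # illustrative true positives / false negatives
lo, hi = wilson_interval(tp, tp + fn)
print(f"Sensitivity {tp / (tp + fn):.1%} (95% CI {lo:.1%}-{hi:.1%})")
```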

Device Performance Monitoring

This is an extension of the general PMS activities, with a focus on changes to the model’s inputs or its continued relevance, such as changes in patient demographics or disease prevalence, shifts in input data, input data integrity issues, and changes in user behaviour or user demographics.
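One common way to operationalise the “shifts in input data” point is a drift statistic comparing deployment data against the training baseline. The sketch below uses the Population Stability Index (PSI) with a conventional 0.2 alert threshold; the binning, threshold and histograms are illustrative assumptions, not part of the guidance.

```python
# Hypothetical sketch: Population Stability Index (PSI) to detect drift in
# the distribution of a model input between training and deployment.
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    """PSI over matched histogram bins; higher means more drift."""
    e_tot, a_tot = sum(expected_counts), sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_tot, eps)  # clamp to avoid log(0)
        a_pct = max(a / a_tot, eps)
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

baseline = [100, 300, 400, 200]  # feature histogram at training time
deployed = [300, 300, 250, 150]  # same histogram from real-world use
score = psi(baseline, deployed)
if score > 0.2:  # 0.2 is a common (assumed) alert threshold
    print(f"Input drift detected (PSI={score:.2f})")
```

A monitoring plan would run such checks periodically and define in advance what actions (investigation, retraining under a predetermined change control plan, field communication) each alert level triggers.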

Cybersecurity

The data used to train and verify the model, and the model itself, are now critical assets that the company must protect. Threats can include:

  • Data Poisoning;

  • Model inversion/stealing;

  • Model Evasion;

  • Data leakage;

  • Overfitting;

  • Model Bias;

  • Performance Drift.

Note that these may come from both adversarial and non-adversarial events.

The guidance also lists a number of controls that the company should consider.
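One basic, non-adversarial control in this spirit is an integrity manifest over the training data, so that tampering or silent corruption (a precondition for data poisoning) is detectable. The sketch below is a minimal illustration using SHA-256 hashes; file names and contents are invented.

```python
# Hypothetical sketch: a hash manifest that detects modified or corrupted
# dataset files by comparing current SHA-256 digests against recorded ones.
import hashlib

def manifest(files):
    """Map each logical file name to the SHA-256 hex digest of its bytes."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def verify(files, recorded):
    """Return the names of files whose current digest differs from the manifest."""
    current = manifest(files)
    return [name for name in recorded if current.get(name) != recorded[name]]

dataset = {"scan_001.dat": b"pixel-data-v1", "labels.csv": b"id,label\n1,0\n"}
recorded = manifest(dataset)

dataset["labels.csv"] = b"id,label\n1,1\n"  # simulated label tampering
tampered = verify(dataset, recorded)       # -> ['labels.csv']
```

In practice the manifest itself would be protected (signed, stored separately) and checks run as part of the training and release pipeline.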

Public Submission Summary

This section of the Guidance maps the information above to the sections of the submission where it should reside or be referenced.

Conclusion & Call to Action

This FDA draft guidance, released on January 6, 2025, is the first comprehensive guidance for AI-enabled devices throughout the total product lifecycle. It complements the recently issued final guidance on predetermined change control plans and aligns with the FDA‘s AI/ML Action Plan and Good Machine Learning Practices (GMLP). The FDA is currently seeking public comments on this draft guidance until April 7, 2025, and will hold a webinar on February 18, 2025, to discuss it further.

Given the importance and timeliness of this topic, SoftComply has planned a Live Online Forum on January 30, 2025, where industry professionals can discuss the implications of this draft guidance with experts and peers.

Watch the Live Discussion
