AI in Medical Devices

January 22, 2024

The arrival of AI on the web has not looked like the murderous Skynet of the Terminator series or HAL 9000 from 2001: A Space Odyssey. Instead, it has looked like a consumer tool for generating funny pictures or getting school assignments done with no effort.

AI tools have, in fact, been around for a while – used for search optimization, tailored marketing and more. The same can be said for AI in electronic devices, including Medical Devices; we simply used terms like Machine Learning, Language Models and “algorithms”.

The potential applications for these tools are vast. They can process enormous amounts of data and make predictions – more or less accurate – based on the existing pool of information.

The importance of the “training data” cannot be overstated. There are several documented examples of ChatGPT “hallucinating” and confidently reporting completely false information.

In the MedTech sector, we want AI to make educated predictions without becoming “creative”. We don’t need (yet?) self-aware AI; we have our familiar trained clinicians for that. We can leave creative AI to blue-skies research, or to Johnny, who wants to create a picture of Nicolas Cage singing Thrash Metal.

But even then, we need regulations and limits for AI applications in MedTech. The FDA recently released draft guidance on how the continuous re-training of AI- and Machine Learning-enabled Medical Devices should be managed.

The EU went a step further and is working to pass an AI Act that covers the general application of AI across all industries. Under the AI Act, most AI-enabled Medical Devices will fall into the high-risk category of AI systems, which must pass a conformity assessment prior to market access. If you are a Notified Body struggling to get out of the MDR/IVDR backlog, this is another curveball for you.

There is plenty of interesting reading about AI in MedTech, but it can be summarized as follows:

  1. Make sure the AI does what it is supposed to do, within predefined boundaries. You don’t want it to become too “wild”;
  2. Make sure that the data used for training is 100% reliable, and the same for any new data added to the pool; the security of this data is also paramount;
  3. Clearly communicate to the user what the AI is supposed to do, how it does it, what its limits are and what type of human supervision is required.
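Point 1 above can be sketched in code as a simple output guard: predictions that leave a range validated at design time never reach the user unreviewed. A minimal Python sketch, where the function name and the heart-rate range are purely illustrative assumptions, not from any real device:

```python
# Minimal sketch of point 1: reject model outputs outside a
# predefined, validated range. All names and numbers are hypothetical.

PREDEFINED_RANGE = (40.0, 180.0)  # e.g. a plausible heart-rate band, bpm

def guard_prediction(value: float) -> float:
    """Return the prediction only if it stays within the boundaries
    set at design time; otherwise flag it for human review."""
    low, high = PREDEFINED_RANGE
    if not (low <= value <= high):
        raise ValueError(
            f"Prediction {value} outside validated range "
            f"[{low}, {high}]; escalate to a clinician"
        )
    return value

print(guard_prediction(72.0))  # in range, passes through
```

The design choice here mirrors the article’s point: the “intelligence” about what is acceptable lives with the developers who set the boundaries, not with the model.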

In summary, AI should be treated like any other software tool able to analyse large quantities of data and make sense of them, but the real “intelligence” will always reside with the developers who created and trained it.

After all, as my primary school teacher used to say, “a computer is an incredibly fast, but still deeply dumb, object”.

Suggested watch: War Games (1983)
