[Image: HAL 9000]

AI in Medical Devices

22 Jan 2024
by Marion

AI on the web has not materialized as the murderous Skynet from the Terminator series or HAL 9000 from 2001: A Space Odyssey. Instead, it has arrived as a consumer tool for generating funny pictures or getting school assignments done with no effort.

AI tools have, in fact, been around for a while – used for search optimization, tailored marketing and more. The same can be said for AI in electronic devices, including Medical Devices. We just used terms like Machine Learning, Language Models and “algorithms”.

The potential applications for these tools are undoubtedly vast. They can process enormous amounts of data and make predictions – more or less accurate – based on the existing pool of information.

The importance of the “training data” cannot be overstated. There are several well-known examples of ChatGPT “hallucinating” and reporting completely false information.

In the MedTech sector, we want AI to be able to make educated predictions without becoming “creative”. We don’t need (yet?) self-aware AI; we have our familiar trained clinicians for that. We can leave creative AI for blue-skies research, or to Johnny who wants to create a picture of Nicolas Cage singing thrash metal.

But even then, we need some regulations or limits for AI applications in MedTech. The FDA recently released a draft guidance on how the continuous re-training of AI- and Machine Learning-enabled Medical Devices should be managed.

The EU went a step further and is working to approve an AI Act that covers the general application of AI across all industries. Under the AI Act, most AI-enabled Medical Devices will fall into the high-risk category of AI systems, which are required to pass a conformity assessment prior to market access. If you are a Notified Body struggling to get out of the MDR/IVDR backlog, this is another curveball for you.

There is plenty of interesting reading about AI in MedTech, but it can be summarized as follows:

  1. Make sure the AI does what it is supposed to do, within predefined boundaries. You don’t want it to become too “wild” (a minimal sketch of such guardrails follows this list);
  2. Make sure that the data used for training is 100% reliable, and the same goes for new data added to the pool; the security of this data is also paramount;
  3. Clearly communicate to the user what the AI is supposed to do, how it does it, what its limits are and what type of human supervision is required.
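For points 1 and 2, such guardrails are often plain software checks rather than anything exotic. Here is a minimal, hypothetical Python sketch, assuming a device that predicts a heart rate: the function names, bounds and hashing scheme are illustrative, not taken from any real device or standard.

import hashlib

HEART_RATE_BOUNDS = (30.0, 220.0)  # illustrative validated range, in bpm

def validate_prediction(value: float, bounds: tuple[float, float]) -> float:
    """Point 1: reject a model output that falls outside its validated range."""
    low, high = bounds
    if not (low <= value <= high):
        raise ValueError(f"prediction {value} is outside the validated range {bounds}")
    return value

def dataset_is_unchanged(path: str, approved_sha256: str) -> bool:
    """Point 2: check a dataset file against the hash recorded when it was approved."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == approved_sha256

The point is not the code itself, but that “predefined boundaries” and “reliable data” translate into simple, verifiable checks that a regulator or auditor can inspect.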

In summary, AI should be treated like any other software tool able to analyse large quantities of data and make sense of it, but the real “intelligence” will always reside with the Developers who created and trained it.

After all, as my primary school teacher used to say, “a computer is an incredibly fast but still deeply dumb object”.

Suggested watch: WarGames (1983)
