AI/ML in Medical Devices: US & EU Regulatory Perspectives
Authors:
  • Eric Henry

This edition of the Collective Learning on AI/ML column is authored by Eric Henry, Senior Quality Systems & Compliance Advisor, FDA & Life Sciences Practice, King & Spalding LLP. Opinions are his own.

Statement of fact: This article will be obsolete quickly after it is published. Use the messaging here as a springboard to further education and action and not as an authoritative source of either the current or future state of AI/ML-related regulatory requirements.


It’s not easy to discuss current applicable regulatory frameworks for artificial intelligence / machine learning (AI/ML)-enabled medical devices. There is both a dearth of reliable aggregated information and considerable volatility in the regulatory landscape. Hopefully, this article provides at least a high-level view of the current and future state of relevant regulations, guidance, standards, and enforcement trends in the United States and European Union.

U.S. Perspective

The regulatory landscape in the U.S. is a complex mix of federal regulations, regulatory and industry guidance, and national and global standards. The U.S. Quality System Regulation describes regulatory requirements for Current Good Manufacturing Practices (cGMP) for medical devices. Although there is no specific mention of AI/ML in this regulation in place since 1996, it is still the regulation of record for all medical devices marketed and distributed in the United States. Providing more granular views of FDA’s thinking regarding design controls and medical device software are guidance documents issued between 2002 and 2023 focusing on premarket requirements, off-the-shelf software, mobile medical applications, clinical decision support, etc. The only AI/ML-specific guidance FDA has published is the draft guidance Market Submission Recommendations for a Predetermined Change Control Plan for AI/ML-Enabled Device Software Functions. The FDA has further collaborated with Health Canada and the UK’s MHRA to publish three guiding principle documents addressing Predetermined Change Control Plans (PCCPs), Good Machine Learning Practices, and Transparency.


For many medical device companies, FDA is the only federal agency in focus. This myopic view will be increasingly risky, as other agencies have increased their oversight of AI/ML in the healthcare space.

In early 2021, the FDA released its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. The action plan describes a “multipronged approach to advance the agency’s oversight of AI/ML-based medical software.” As part of the agency’s action plan, FDA liaisons participate in the standardization efforts of the Association for the Advancement of Medical Instrumentation (AAMI) AI Committee to address the risk management of AI-driven medical devices, including the development of AAMI CR34971:2022, the first document of its kind to be recognized by the FDA.

Even earlier, since 2010, the Office of the National Coordinator for Health IT (ONC) has used 45 CFR Parts 170 and 171 to govern the certification of health IT, specifically Electronic Health Records (EHRs) with clinical decision support capability. In early 2024, ONC amended these regulations as the HTI-1 final rule and included a definition of “predictive decision support interventions” to address AI in health IT applications, some of which may also fall under the definition of a medical device subject to FDA enforcement.

The Federal Trade Commission (FTC) is also active in enforcement of AI/ML and gains its authority from Section 5 of the FTC Act, which, in part, prohibits “unfair or deceptive acts or practices in or affecting commerce.” FTC’s most severe enforcement tool is “algorithmic disgorgement,” which forces the deletion of algorithms and associated training and testing data that the FTC determines were illegally collected and/or used.

Two more U.S. elements of the regulatory landscape are the non-binding National Institute of Standards and Technology (NIST) AI Risk Management Framework, a multi-sector guide to assessing and mitigating risk through governance, mapping, measurement, and management practices, and the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Executive Order has the practical effect of increasing government-level postmarket monitoring and promoting the use of assurance labs or similar mechanisms for performance assessment of AI models. The Executive Order references the NIST document (and an accompanying document on generative AI) as a means to “ensure the development of safe, secure, and trustworthy AI systems.”


Outside of government entities, existing design controls, standards, and guidance from a variety of organizations remain applicable to the development of AI/ML-enabled medical devices, with IEC 62304:2006 +AMD1:2015, Medical device software – Software life cycle processes and IEC 82304-1:2016, Health software – Part 1: General requirements for product safety leading the way. There are also hundreds of global standards projects addressing AI/ML in medical devices, most of them still works in progress.

Rare among finished works, the AAMI has published AAMI TIR34971:2023, Application of ISO 14971 to machine learning in artificial intelligence – Guide. Heavily derived from the aforementioned AAMI AI consensus report and international white papers, this standard was jointly published with the British Standards Institution (designated as BS/AAMI 34971:2023 for the UK) and provides specific guidance for performing risk management for medical devices that incorporate AI/ML.

It is worth noting that, to date, FDA has cleared almost 1,000 AI/ML-enabled medical devices, but all of them have incorporated locked algorithms, meaning that once trained, the algorithm stays static and does not change unless the developer releases a new version. No adaptive algorithms employing continuous learning that autonomously update the algorithm in its use environment have been cleared for use in the United States.

As the medical device industry looks to the future, it can expect a variety of new regulations, standards, guidance, and enforcement activity impacting the U.S. market. Of particular note are the finalization of FDA’s PCCP guidance, a published draft of FDA’s AI/ML medical device lifecycle guidance, and the potential use by FTC of “algorithmic disgorgement” against a medical device manufacturer/developer. The U.S. is also seeing well over a dozen states progress toward adoption of AI-related legislation, mostly focused on data privacy and equity.

EU Perspective

The EU environment for AI/ML-enabled medical device regulation is equally complex, and the region is subject to more multi-sector regulation not specific to medical devices or healthcare. As in the U.S., continuous learning is not yet an option for medical devices, and pre-existing frameworks drive the clearance of products. Many AI/ML-enabled devices have been cleared under the now obsolete Medical Device Directive (MDD), with the “new” (since 2017) Medical Device Regulation (MDR) still struggling to gain traction and stability.


For software design controls, the EU heavily leverages guidance documents from the International Medical Device Regulators Forum (IMDRF). The EU’s Medical Device Coordination Group (MDCG) published MDCG 2019-11 in 2019 to address software qualification and classification, and it references the IMDRF documents as authoritative. In 2021, IMDRF published its only AI-related document to date as WG (PD1)/N67, Machine Learning-enabled Medical Devices – A subset of Artificial Intelligence-enabled Medical Devices: Key Terms and Definitions.

As stated, however, many multi-sector laws and regulations are applicable to AI/ML in the medical device industry, with some challenges in harmonizing them with the EU MDR. Applicable multi-sector legislation currently in force addresses data governance, privacy, cybersecurity, and digital services.

The most notable EU legislation to impact AI/ML-enabled medical devices is the EU AI Act. The AI Act is also multi-sector and classifies the vast majority of medical devices as “high risk,” regardless of the more nuanced safety-oriented device classifications driven by the EU MDR. The final text of the EU AI Act will likely have been published by the time this article appears, but it will not be in full effect for medical device companies until 2026.

Further complicating the EU regulatory picture is a plethora of member state regulations and guidance.

The future state of the EU regulatory landscape will be driven largely by implementation of the EU AI Act and potential use of the European Health Data Space (EHDS) in research, training, and testing of AI models. Beyond the AI Act, the MDCG will publish one or more AI-related guidance documents, and the EU will likely harmonize several new or revised standards providing more clarity on AI/ML regulatory expectations.

The biggest issue to watch in the EU, however, is the harmonization (or lack thereof) of medical device-specific and multi-sector legislation, regulations, and guidance at both the EU and member state levels. The potential conflicts between regulatory expectations in the areas of privacy, data management, lifecycle requirements, cybersecurity, etc., are numerous and not likely to be resolved quickly or easily. Add to this the large number of works in progress at the regulation, standard, and guidance levels that have not yet made their presence felt.

Conclusion

The volatility, inconsistency, and conflict in a rapidly evolving regulatory space, related to even more rapidly evolving technology, lead to three recommendations (in addition to the statement of fact that opened this article).

First, use existing regulatory literature effectively and as creatively as possible to design, develop, and implement AI/ML-enabled medical devices.

Second, stay engaged with regulators and notified bodies to chart the most effective path through the complex and uncertain regulatory maze. Remember that both industry and external regulatory stakeholders are learning how best to proceed.

Third, stay abreast of the latest developments in both regulatory and technological literature. Only through awareness and education can the medical device industry continue to influence and comply with applicable regulatory frameworks.
