
AI Regulation Lagging Behind Innovation and Creating Friction

Developers of artificial intelligence (AI) products have moved quickly into a wide range of product types, but regulatory agencies are struggling to keep pace. One problem for industry is that regulators are taking somewhat different approaches, which significantly complicates efforts to obtain regulatory approval across national boundaries.

Regulatory harmonization is not completely absent from the international regulatory picture for AI, as seen in the Oct. 27, 2021, publication of guiding principles for good machine learning practice (GMLP). The FDA, Health Canada, and the U.K. Medicines and Healthcare products Regulatory Agency (MHRA) collaborated on this publication, the earliest AI-specific policy document from Health Canada and the MHRA. The European Union, Australia’s Therapeutic Goods Administration (TGA), and China’s National Medical Products Administration (NMPA) were not involved in this machine learning (ML) policy collaboration.

These guidelines include several widely accepted recommendations for the ML subset of AI, such as the recommendation that developers draw their training and test data sets from independent sources when developing and validating their algorithms. This recommendation and several others, such as continuous postmarket surveillance, find parallels in most existing national or supranational regulatory requirements.

Health Canada, the FDA, and the MHRA collaborated a second time on another set of ML-related principles, the June 13, 2024, guiding principles for transparency for ML-enabled medical devices. The transparency principles include an explanation of the underlying technologies and a characterization of the training and test data sets. Beyond these two sets of ML principles, the three agencies’ approaches to AI and ML diverge significantly, although the role of the International Medical Device Regulators Forum (IMDRF) in AI is also worth highlighting.

In May 2022, the IMDRF released a document providing key terms and definitions related to ML-enabled medical devices. The document describes which types of products would and would not qualify as ML-enabled devices, but the IMDRF acknowledged that regulatory systems differ over which products qualify as medical devices in the first place. Devices used for in vitro fertilization are one example, although the definitional and other tensions do not end there.

The IMDRF terminology guidance stated that the definition of a locked ML product also varies between regulatory systems. Some regulatory regimes use the term to mean a permanently locked algorithm, while others use it to describe a system that can be updated only by the user or administrator. This is another potential source of confusion for sponsors seeking marketing authorization in multiple regulatory jurisdictions.

Another component of the international regulatory picture is the European Union’s Artificial Intelligence Act (AI Act), which affects both the EU economy as a whole and medical technology in particular. The AI Act has drawn criticism for its overlap with existing regulations for medical devices and in vitro diagnostics. The Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR), both promulgated in 2017, are still not fully implemented, and the addition of the AI Act has created another barrier to market access in the 27 EU member states, given the Act’s stipulation that virtually all medical uses of AI are classified as high risk.

In contrast, most of the AI products granted market access in the U.S. are moderate-risk devices cleared via the 510(k) pathway, which allows the developer to demonstrate substantial equivalence to a previously cleared predicate device rather than meet more burdensome premarket requirements. The high-risk designation in the EU imposes significant premarket demands on developers that they do not face in other regulatory frameworks, yet another source of disharmony in an increasingly fragmented regulatory picture for AI.

This blog is the first in a series on AI and ML in medical devices and diagnostics. Stay tuned for future installments examining the regulatory picture for AI and ML in the weeks ahead.

