On 27 October 2021, the U.S. Food and Drug Administration (“FDA”), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (“MHRA”) (together the “Regulators”) jointly published 10 guiding principles to inform the development of Good Machine Learning Practice (“GMLP”) for medical devices that use artificial intelligence and machine learning (“AI/ML”).
AI and ML have the “potential to transform health care” through their ability to analyse vast amounts of data and learn from real-world use. However, these technologies also pose unique challenges, given their complexity and the constantly evolving, data-driven nature of their development. The Regulators developed the guiding principles to “help promote safe, effective, and high-quality medical devices that use . . . AI/ML” and to “cultivate future growth” in this fast-paced field.
The Regulators predict that the guiding principles could be used to: (i) adopt good practices from other sectors; (ii) tailor these practices to the medical technology/healthcare sector; and (iii) create new practices specific to the medical technology/healthcare sector. The Regulators expect these joint principles to inform broader international engagements as well.
The 10 Guiding Principles
The guidance published by the Regulators sets out the 10 principles in full; in short, they recommend:
- Leveraging multi-disciplinary expertise throughout the total product life cycle
- Implementing good software engineering and security practices
- Ensuring clinical study participants and data sets are representative of the intended patient population
- Making training data sets independent of test sets
- Basing selected reference datasets upon best available methods
- Tailoring the model design to the available data and ensuring it reflects the intended use of the device
- Placing focus on the performance of the human-AI team
- Ensuring testing demonstrates device performance under clinically relevant conditions
- Providing users with clear, essential information
- Monitoring deployed models for performance and managing re-training risks
These principles cover the entire life cycle of devices with the aim of ensuring safety and efficacy. The Regulators have focused on the use of appropriate datasets and on carrying out sufficient testing before marketing AI/ML-based devices. The guiding principles also set out an ongoing recommendation to manage risks, which will involve monitoring, and potentially re-training, AI/ML-based devices after deployment.
These principles are merely a starting point. The Regulators stated, “[a]s the AI/ML medical device field evolves, so too must GMLP best practice and consensus standards.”
Possible Impact & International Considerations
AI and ML are clearly top priorities from a global health regulatory perspective. The Regulators expect this collaboration to lead to further and broader international collaborative work. As noted above, the Regulators expect these guidelines to evolve and emphasize the importance of “strong partnerships with [their] international public health partners.”
As one example, the guiding principles identify areas of possible collaboration for the International Medical Device Regulators Forum (“IMDRF”), international standards organizations, and other collaborative bodies. These areas include “research, creating educational tools and resources, international harmonization, and consensus standards.”
This collaboration is important as it follows on from the individual work each agency has been doing in this space. For example, MHRA has consulted on the future regulation of medical devices in the UK, including by developing a Work Programme for Software and AI-based Medical Devices (which we previously discussed in our blog post). FDA has also been active in the AI/ML space, and several more FDA digital health developments are on the horizon for 2022. This international regulatory collaboration suggests the Regulators are working towards a united front, built on close alignment around best practices and international regimes. It also shows, for example, that the UK is considering international regimes broadly, rather than simply aligning with the European Union.
In sum, it appears there is an appetite for further international regulatory collaboration, so watch this space for the potential development of more detailed and sector-specific international standards and practices for AI/ML-based technologies.