On October 19, 2023, the World Health Organization (“WHO”) published a set of regulatory considerations on artificial intelligence (“AI”) for health (press release and full publication). The publication is not guidance or policy but is intended as a resource for stakeholders across the medical device ecosystem, including manufacturers who design and develop AI-embedded medical devices, regulators identifying approaches to manage and facilitate AI systems, and health practitioners who wish to use medical devices and AI systems in clinical practice.
The WHO recognizes the vital and ever-increasing role AI is playing in healthcare, but also highlights the risks these technologies may pose. The publication responds to the growing need to responsibly manage AI health technologies by outlining six areas for governments and regulators to consider when developing or updating their legislation and guidance on AI at the national or regional level.
AI in Healthcare
AI has the potential to transform the health sector. It can improve health outcomes at different stages of healthcare provision. For example, it can work at an early/preventative stage by encouraging self-care, prevention and wellness. It can help in research and development by strengthening outcomes of clinical trials. AI also has the potential to improve the clinical care of patients, for example, by personalizing care; improving diagnosis and treatment; enhancing the delivery of care and health system efficiency; and supplementing healthcare professionals’ knowledge, skills and competencies.
Risk and Challenges of AI in Healthcare
The WHO acknowledges that with the rapid deployment of AI, the use of AI technology in healthcare also presents significant challenges. There may be a lack of understanding of how complex AI technologies work (i.e., the ‘black box’ problem), with a potential for patients to be harmed. AI technologies, especially those that use machine learning processes, are vulnerable to biases present in the model itself and the training data. Further, AI technologies used in healthcare often have access to sensitive personal data. There is therefore a need to effectively safeguard and manage this data, as well as the purposes for which AI is deployed.
How to Regulate AI in Healthcare
To manage these risks the WHO outlines six key considerations for the regulation of AI in healthcare:
1. Transparency and Documentation
WHO emphasizes the need for documentation and transparency to build trust in AI systems among developers, manufacturers and end-users. It recommends that developers of AI technology document the entire product lifecycle and track development processes. Effective documentation and transparency help guard against data biases and manipulation.
2. Risk Management
The WHO also highlights that AI systems used in healthcare may fall into many categories, including AI systems used in medical devices. WHO recommends taking a holistic risk-based approach throughout the entire product lifecycle, covering both the pre-market and post-market phases. It recommends that developers, manufacturers and deployers of AI technology address issues relating to intended use, continuous learning, human interventions, training models and cybersecurity threats.
3. Intended Use and Analytical and Clinical Validation
WHO highlights the importance of clearly setting out an AI system’s intended use. Given that AI systems depend on their code, training data, deployment setting and user interaction, it is important to describe the relevant use case in order for the system to perform safely and effectively.
Additionally, WHO recommends undertaking the appropriate validation of the AI system. It recommends that data are externally validated and that it is clear how those data are used. External validation ensures the quality and safety of input data used in models, minimizing risk to patients.
4. Data Quality
Data is the bedrock of AI systems, and healthcare generates huge amounts of it. However, the use of poor-quality and biased data sets is a significant concern. WHO recommends the use of rigorous evaluation systems before release to ensure data quality, which should help ensure that systems do not amplify biases.
5. Privacy and Data Protection
WHO recommends that developers, deployers and manufacturers of AI technology understand the scope and requirements of relevant data protection and privacy laws. The EU General Data Protection Regulation (“GDPR”) and the U.S. Health Insurance Portability and Accountability Act (“HIPAA”) are of particular relevance.
6. Engagement and Collaboration
WHO recommends that developers, manufacturers, healthcare practitioners, patients, policymakers, regulatory bodies and other stakeholders actively engage and collaborate with each other to improve the safety and quality of AI technologies. It notes that many regulatory bodies (including the UK Medicines and Healthcare products Regulatory Agency, the European Commission, the U.S. Food and Drug Administration and other international regulators) are already engaging and collaborating on AI. The WHO considers it fundamental to streamline the oversight process for AI regulation through such engagement and collaboration in order to accelerate practice-changing advances in AI.
What Does This Mean for AI in Healthcare?
The WHO’s publication intends to outline the key principles that governments and regulatory authorities should consider when developing new guidance or adapting existing guidance on AI at a national or regional level.
The publication picks up on some key themes: firstly, the (potential) importance of AI in the healthcare sector; secondly, the need to effectively regulate AI to ensure patient safety; and thirdly, the need for stakeholders to continue to engage and for the community at large to work towards shared understanding and mutual learning. As such, we expect to see more international guidance and principles published on AI in the short term.
If you have any queries concerning the material discussed in this client alert or the use of AI in healthcare more broadly, please contact members of our Food, Drug, and Device practice.