On October 19, 2023, the World Health Organization (“WHO”) published a set of regulatory considerations on artificial intelligence (“AI”) for health (press release and full publication).  The publication is not guidance or policy but is intended as a resource for relevant stakeholders in the medical device ecosystem, including manufacturers who design and develop AI-embedded medical devices, regulators in the process of identifying approaches to manage and facilitate AI systems, and health practitioners who wish to use medical devices and AI systems in clinical practice.

The WHO recognizes the vital and ever-increasing role AI is playing in healthcare.  However, it highlights the risks that these technologies may pose.  It responds to the growing need to responsibly manage AI health technologies by outlining six areas for governments and regulators to consider when developing and/or updating their legislation and guidance on AI at national or regional levels.

AI in Healthcare

AI has the potential to transform the health sector.  It can improve health outcomes at different stages of healthcare provision.  For example, it can work at an early/preventative stage by encouraging self-care, prevention and wellness.  It can help in research and development by strengthening the outcomes of clinical trials.  AI also has the potential to improve the clinical care of patients, for example by personalizing care, improving diagnosis and treatment, enhancing the delivery of care and health system efficiency, and supplementing healthcare professionals’ knowledge, skills and competencies.

Risk and Challenges of AI in Healthcare

The WHO acknowledges that, with the rapid deployment of AI, the use of AI technology in healthcare also presents significant challenges.  There may be a lack of understanding of how complex AI technologies work (i.e., the ‘black box’ problem), with a potential for patients to be harmed.  AI technologies, especially those that use machine learning processes, are vulnerable to biases present in the model itself and in the training data.  Further, AI technologies used in healthcare often have access to sensitive personal data.  There is therefore a need to effectively safeguard and manage this data, as well as the purposes for which AI is deployed.

How to Regulate AI in Healthcare

To manage these risks, the WHO outlines six key considerations for the regulation of AI in healthcare:

1. Transparency and Documentation

WHO emphasizes the need for documentation and transparency to build trust in AI systems among developers, manufacturers and end-users.  It recommends that developers of AI technology document the entire product lifecycle, as well as tracking development processes.  Effective documentation and transparency help guard against data biases and manipulation.

2. Risk Management

The WHO also highlights that AI systems used in healthcare may fall into many categories, including AI systems used in medical devices.  WHO recommends taking a holistic risk-based approach throughout the entire product lifecycle, including during pre- and post-market deployment.  It recommends that developers, manufacturers and deployers of AI technology address issues relating to intended use, continuous learning, human interventions, training models and cybersecurity threats. 

3. Intended Use and Analytical and Clinical Validation

WHO highlights the importance of clearly setting out an AI system’s intended use.  Given that AI systems are dependent on the code, training data, setting and user interaction, it is important to describe the relevant use case in order for the system to perform safely and effectively.

Additionally, WHO recommends undertaking the appropriate validation of the AI system.  It recommends that data are externally validated and that it is clear how those data are used.  External validation ensures the quality and safety of input data used in models, minimizing risk to patients.

4. Data Quality

Data is the bedrock of AI systems and there are huge amounts of data in healthcare.  However, the use of poor quality and biased data sets is of significant concern.  WHO recommends the use of rigorous evaluation systems pre-release to ensure data quality.  This should ensure systems do not amplify biases.

5. Privacy and Data Protection

WHO recommends that developers, deployers and manufacturers of AI technology understand the scope and requirements of relevant data protection and privacy laws.  The EU General Data Protection Regulation (“GDPR”) and the U.S. Health Insurance Portability and Accountability Act (“HIPAA”) are of particular relevance.  

6. Engagement and Collaboration

WHO recommends that developers, manufacturers, health-care practitioners, patients, policymakers, regulatory bodies and other stakeholders actively engage and collaborate with each other to improve the safety and quality of AI technologies.  It notes that many regulatory bodies (including the UK Medicines and Healthcare products Regulatory Agency, the European Commission, the U.S. Food and Drug Administration and other international regulators) are already engaging and collaborating on AI.  The WHO considers it fundamental to streamline the oversight process for AI regulation through such engagement and collaboration in order to accelerate practice-changing advances in AI.

What Does This Mean for AI in Healthcare?

The WHO’s publication intends to outline the key principles that governments and regulatory authorities should consider when developing new guidance or adapting existing guidance on AI at a national or regional level.  

The publication picks up on some key themes: first, the (potential) importance of AI in the healthcare sector; second, the need to effectively regulate AI to ensure patient safety; and third, the need for stakeholders to continue to engage and for the community at large to work towards shared understanding and mutual learning.  As such, we expect to see more international guidance and principles being published on AI in the short term.

If you have any queries concerning the material discussed in this client alert or the use of AI in healthcare more broadly, please contact members of our Food, Drug, and Device practice.

Sarah Cowlishaw

Advising clients on a broad range of life sciences matters, Sarah Cowlishaw supports innovative pharmaceutical, biotech, medical device, diagnostic and technology companies on regulatory, compliance, transactional, and legislative matters.

Sarah is a partner in London and Dublin practicing in the areas of EU, UK and Irish life sciences law. She has particular expertise in medical devices and diagnostics, and on advising on legal issues presented by digital health technologies, helping companies navigate regulatory frameworks while balancing challenges presented by the pace of technological change over legislative developments.

Sarah is a co-chair of Covington’s multidisciplinary Digital Health Initiative, which brings together the firm’s considerable resources across the broad array of legal, regulatory, commercial, and policy issues relating to the development and exploitation of digital health products and services.

Sarah regularly advises on:

  • obligations under the EU Medical Devices Regulation and In Vitro Diagnostics Medical Devices Regulation, including associated transition issues, and UK-specific considerations caused by Brexit;
  • medical device CE and UKCA marking, quality systems, device vigilance and rules governing clinical investigations and performance evaluations of medical devices and in vitro diagnostics;
  • borderline classification determinations for software medical devices;
  • legal issues presented by digital health technologies including artificial intelligence;
  • general regulatory matters for the pharma and device industry, including borderline determinations, adverse event and other reporting obligations, manufacturing controls, and labeling and promotion;
  • the full range of agreements that span the product life-cycle in the life sciences sector, including collaborations and other strategic agreements, clinical trial agreements, and manufacturing and supply agreements; and
  • regulatory and commercial due diligence for life sciences transactions.

Sarah has been recognized as one of the UK’s Rising Stars by Law.com (2021), which lists 25 up and coming female lawyers in the UK. She was named among the Hot 100 by The Lawyer (2020) and was included in the 50 Movers & Shakers in BioBusiness 2019 for advancing legal thinking for digital health.

Sarah is also Graduate Recruitment Partner for Covington’s London office.

Ellie Handy

Working with companies in the life sciences and technology sectors, Ellie Handy focuses on EU, Irish, and UK life sciences regulatory and commercial matters.

Ellie advises clients on regulatory issues including classification, biologics, orphans, paediatrics, GxP, market and data exclusivity, clinical research, labelling and promotion, reporting obligations, medical devices, and digital health. Ellie also advises companies in the food, cosmetic and consumer products sectors regarding regulatory compliance and borderline issues. Ellie provides advice in relation to corporate transactions and restructuring, in particular performing regulatory due diligence.

Ellie represents and works with a wide range of clients working in the life sciences and technology sectors on both contentious and non-contentious regulatory matters.

Ellie’s pro bono work includes assisting charities. In addition to her role at Covington, Ellie spent three years working in life sciences regulatory practice in London.

Tamzin Bond

Tamzin Bond is a Trainee Solicitor who attended BPP School of Law. Prior to joining the firm, Tamzin completed her Ph.D. in Chemistry at Imperial College London.