
WHO Releases New Guidance on Regulating AI in Healthcare

By Adina Pullman for Distilled Post

The World Health Organization (WHO) has released comprehensive new guidance on regulating artificial intelligence (AI) technologies for healthcare. The highly anticipated publication outlines "key regulatory considerations on artificial intelligence (AI) for health" and aims to support countries in establishing appropriate regulations that responsibly manage the risks and harness the significant benefits AI promises for transforming healthcare.

WHO Director-General Dr Tedros Adhanom Ghebreyesus emphasised that "Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation." He said the new guidance will be critical in helping countries "regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks."

Six key areas for regulation

The extensive WHO publication highlights six key areas that comprehensive AI healthcare regulations must address: transparency, risk management, safety, quality, privacy/data protection, and collaboration. On transparency, the guidance stresses the importance of "documenting the entire product lifecycle" of AI systems to build public trust. For risk management, it states that key issues such as training models, human oversight and cybersecurity must be thoroughly addressed. A commitment to using high-quality, unbiased datasets is also vital to avoid amplifying errors and biases.

The guidance also covers navigating complex existing regulations, such as the EU's General Data Protection Regulation (GDPR) and the US Health Insurance Portability and Accountability Act (HIPAA), to ensure privacy and data protection. It encourages collaboration among diverse stakeholders, including regulators, patients, healthcare professionals, industry and government, to help ensure regulatory compliance throughout an AI product's lifecycle.

Managing risks of biased data

According to the WHO, "AI systems are complex and depend not only on the code they are built with but also on the data they are trained on." The organisation warns that better regulations are needed to help manage risks like biases being amplified from problematic training data. The WHO guidance aims to outline core principles and best practices that governments globally can follow and adapt to develop or update AI regulations for the healthcare sector specifically, whether "at national or regional levels."

Guidance to balance innovation with oversight

With AI technologies advancing rapidly and being deployed in healthcare settings in many countries before full regulations are in place, the WHO stresses that this new guidance is both timely and urgently needed. The recommendations aim to balance innovation with responsible oversight, ensuring these transformative technologies benefit patients and doctors while protecting safety, privacy and human rights. If adopted widely, the guidelines could play a key role in fostering the ethical and effective integration of AI into healthcare worldwide.