COPENHAGEN: The growing use of artificial intelligence in healthcare necessitates stronger legal and ethical safeguards to protect patients and healthcare workers, the World Health Organization's Europe branch said in a report published Wednesday.
That's the conclusion of a report on AI adoption and regulation in healthcare systems in Europe, based on responses from 50 of the 53 member states in the WHO's European region, which includes Central Asia.
Only four countries, or 8%, have adopted a dedicated national AI health strategy, and seven others are in the process of doing so, the report said.
"We stand at a fork in the road," Natasha Azzopardi-Muscat, WHO Europe's director of health systems, said in a statement.
"Either AI will be used to improve people's health and well-being, reduce the burden on our exhausted health workers and bring down healthcare costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care," she said.
Almost two-thirds of countries in the region are already using AI-assisted diagnostics, particularly in imaging and detection, while half of the countries have introduced AI chatbots for patient engagement and support.
The WHO urged its member states to address "potential risks" associated with AI, including "biased or low-quality outputs, automation bias, erosion of clinician skills, reduced clinician-patient interaction and inequitable outcomes for marginalised populations".
Regulation is struggling to keep pace with the technology, WHO Europe said, noting that 86% of member states cited legal uncertainty as the primary barrier to AI adoption.
"Without clear legal standards, clinicians may be reluctant to rely on AI tools and patients may have no clear path for recourse if something goes wrong," said David Novillo Ortiz, the WHO's regional adviser on data, artificial intelligence and digital health.
WHO Europe said countries should clarify accountability, establish redress mechanisms for harm, and ensure that AI systems "are tested for safety, fairness and real-world effectiveness before they reach patients".