The rapid growth of generative AI in health care has prompted the agency to set out guidelines for ethical use.
The introduction of health-care technologies based on artificial intelligence (AI) could be “dangerous” for people in lower-income countries, the World Health Organization (WHO) has warned.
The organization, which today issued a report describing new guidelines on large multi-modal models (LMMs), says it is essential that uses of the developing technology are not shaped only by technology companies and those in wealthy countries. If models aren’t trained on data from people in under-resourced places, those populations might be poorly served by the algorithms, the agency says.
“The very last thing that we want to see happen as part of this leap forward with technology is the propagation or amplification of inequities and biases in the social fabric of countries around the world,” Alain Labrique, the WHO’s director for digital health and innovation, said at a media briefing today.
Overtaken by events
The WHO issued its first guidelines on AI in health care in 2021. But the rapid rise in the power and availability of LMMs prompted the organization to update those guidelines less than three years later. Also called generative AI, these models, including the one that powers the popular ChatGPT chatbot, process and produce text, videos and images.
LMMs have been “adopted faster than any consumer application in history”, the WHO says. Health care is a popular target. Models can produce clinical notes, fill in forms and help doctors to diagnose and treat patients. Several companies and health-care providers are developing specific AI tools.