In AI development, equality starts with a mandate for fairness and inclusivity

What are the most important ethical considerations for artificial intelligence (AI) in health care?

The World Health Organization tried to answer this question in its recent report “Ethics and Governance of Artificial Intelligence for Health.” It offers recommendations on how to design safe, transparent, and equitable AI products and applications that can help providers make informed medical decisions and help patients achieve positive outcomes. The report’s recommendations include:

  • Humans should remain in control of health care systems and medical decisions.
  • AI products should be required to meet standards for safety, accuracy, and effectiveness within well-defined use cases.
  • AI developers should be transparent about how products are designed and function before they’re used.
  • Health care businesses that rely on AI should ensure these tools are used under appropriate conditions and by trained personnel.
  • AI must be designed to encourage inclusiveness and equality.
