To address the rapid pace of artificial intelligence (AI) development and use in healthcare, WHO released a guidance document outlining six key principles for the ethical use of AI in health. Twenty WHO-appointed experts spent two years developing the guidance, which marks the first consensus report on AI ethics in healthcare settings. WHO recognizes that many people are concerned about the potentially harmful effects AI could have on human health, but notes that these fears need not come to fruition if robust governance frameworks are established early on.
WHO’s six principles for the ethical use of artificial intelligence in healthcare settings are:
(1) Protect autonomy;
(2) Promote human well-being, human safety, and the public interest;
(3) Ensure transparency, explainability, and intelligibility;
(4) Foster responsibility and accountability;
(5) Ensure inclusiveness and equity;
(6) Promote AI that is responsive and sustainable.
WHO recommends that all stakeholders use these six principles as the framework for AI in healthcare and ensure they are applied from the outset at every stage of development.
WHO acknowledges that any new technology carries risks and uncertainties, which early, robust governance can help mitigate. Its recommendations include: (i) establishing an ethics committee composed of members from diverse backgrounds with knowledge of artificial intelligence; (ii) developing technical standards and guidelines as well as ethical codes of conduct; and (iii) creating transparency mechanisms, such as data ownership agreements or terms-of-service contracts, between the parties involved in the design, production, and deployment stages.
To read the WHO guidance document on artificial intelligence in healthcare settings and learn more about WHO’s six principles, please visit: ‘Ethics and governance of artificial intelligence for health.’