Editor’s Note
With the rapid development of artificial intelligence, the health sector is continually seeking to improve the quality of care and enhance work efficiency by introducing AI. Because large-model technology can handle complex tasks on large-scale data, greatly improving the generalizability, versatility, and practicality of AI, it holds broad prospects and potential in applications such as disease prediction, diagnosis, treatment, and drug development; at the same time, it brings many ethical challenges and risks that urgently need to be addressed.
On January 18, 2024, the World Health Organization (WHO) released the English version of “Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models,” intended to help countries map the benefits and challenges of multi-modal large models in the health sector and to provide policy and practical guidance for their appropriate development, provision, and use.
Given the strategic importance of multi-modal large-model AI for our country in gaining new advantages in future strategic competition and in promoting the health of the general public, our journal has organized experts in the relevant research fields to translate the guidance into Chinese for researchers’ reference, in the hope of advancing research on, and guidance for, the ethical governance of medical large models in our country and achieving a positive interplay between high-quality innovative development and high-level safety.
This article was first published in CNKI, with the reference format as follows:
Wang Yue, Song Yaxin, Wang Yifei, et al. Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models [J/OL]. Chinese Medical Ethics: 1-58 [2024-03-14]. http://kns.cnki.net/kcms/detail/61.1203.R.20240304.1833.002.html.
Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models
Ethics and governance of artificial intelligence for health. Guidance on large multi-modal models. Geneva: World Health Organization; 2024. Licence: CC BY-NC-SA 3.0 IGO.
This translation is not derived from the World Health Organization (WHO), and the WHO is not responsible for the content or accuracy of the translation. The English original should be considered the authoritative text.
Original Version Number: ISBN 978-92-4-008475-9 (electronic version); ISBN 978-92-4-008476-6 (print version)
Translators
Wang Yue1, Song Yaxin1, Wang Yifei1, translators; Yu Lian2, Wang Jing3, reviewers
(1 School of Law, Xi’an Jiaotong University, Xi’an, Shaanxi 710049; 2 School of Public Health, Xi’an Jiaotong University, Xi’an, Shaanxi 710061; 3 Beijing University of Chinese Medicine, Beijing 100010)
(Continued from the previous installment)
7 Liability for General Foundation Models (Multi-Modal Large Models)
As multi-modal large models are applied more widely in healthcare and pharmaceuticals, errors, misuse, and, ultimately, harm to individuals are inevitable. Liability rules must ensure that individuals who suffer such harm are compensated, and new remedial mechanisms must be established where current ones are insufficient or outdated.
The design, development, quality assurance, and deployment of artificial intelligence technologies involve different entities playing different roles, which can complicate the assignment of liability. Developers may seek to hold downstream entities, including providers and deployers, responsible for all harm caused by the use of multi-modal large models, while downstream entities may claim that upstream actions, such as the selection of the data used to train an algorithm, caused the harm. Developers and providers may also argue that, once a medical AI technology has been approved for use by a regulatory agency, they should no longer be held liable for harm (regulatory pre-emption). Assigning liability along the value chain is a challenge facing legislators and policymakers.
A key function of civil liability rules is to ensure that victims of harm can seek compensation and remedy, regardless of how difficult it is to allocate duties and responsibilities among entities involved in the development and deployment of AI technology. If victims find it too difficult to obtain compensation, there is no justice, and parties in the AI value chain have no incentive to avoid similar harm in the future. The rules should also ensure that compensation is sufficient to cover the harm suffered.
The EU has introduced a “causal presumption” in its proposed AI Liability Directive to ease the burden of proof on victims. If a victim can show that one or more entities failed to comply with an obligation relevant to the harm and that a causal link with the performance of the AI is reasonably likely, a court may presume that the non-compliance caused the harm. The party alleged to be liable then bears the burden of rebutting this presumption, for example by showing that another party caused the harm. The legislation is not limited to the original manufacturers of AI systems but covers any actor in the AI value chain.
When all actors in the AI value chain are held jointly liable, each can reduce its liability by demonstrating that it has effectively assessed and mitigated risks.
However, liability mechanisms may still fail to provide complete clarity about responsibility and remedies for harm caused by AI-driven products and services, especially if individuals are unaware that multi-modal large models were used to make medical decisions, and new rules may leave gaps in liability for harm caused by AI-driven medical technologies. Given the speculative nature of multi-modal large models and how little is known about those being rushed to market, governments may wish to treat multi-modal large models used in healthcare as products whose developers, providers, and deployers are subject to strict liability standards. Holding these actors accountable for all errors could ensure that patients are compensated when errors affect them, although this depends on whether patients know that a multi-modal large model was used. While such extensive liability might deter the use of increasingly complex multi-modal large models, it could also reduce the appetite for taking unnecessary risks and slow the deployment of new multi-modal large models in healthcare or public health settings until their many risks and potential harms have been fully identified and addressed.
However, AI accountability rules may still be insufficient to assign blame, as algorithms evolve in ways that developers, providers, and deployers cannot fully control. Moreover, there may still be cases in which victims cannot seek redress, and jurisdictional issues may arise. For example, in the US, patients harmed by seeking advice directly from a multi-modal large model may be unable to obtain damages because the AI system itself is not covered by professional liability rules, and exceptions or limitations in product or consumer liability law may exclude compensation. In other areas of healthcare, compensation is sometimes provided even when fault or liability is uncertain, such as for medical injuries caused by adverse effects of vaccines. The WHO has previously recommended determining whether a “no-fault, no-liability compensation fund” would be an appropriate mechanism for compensating individuals who suffer medical injuries from the use of AI technology, including how resources could be mobilized to pay any claims.
This recommendation remains valid today and can serve as a means to determine compensation for damages caused by multi-modal large models or applications of multi-modal large models.
Recommendations:
• Countries should establish liability along the value chain of the development, provision, and deployment of multi-modal large models to ensure that victims of harm can obtain compensation, however difficult it may be to assign duties and liability among the different entities involved in the technology’s development and deployment.
8 International Governance of General Foundation Models (Multi-Modal Large Models)
Countries should support collective efforts to establish international rules governing multi-modal large models for health and other forms of AI, as the use of AI for such purposes is surging worldwide; the WHO’s “Global Strategy on Digital Health 2020-2025” is one example of such an effort. This process should include strengthening cooperation and collaboration within the UN system to address the opportunities and challenges posed by deploying AI in health and in its broader social and economic applications. Unless governments work together to develop appropriate, enforceable standards, the number of multi-modal large models and other forms of AI that fail to meet appropriate legal, ethical, and safety standards will grow, and harm may occur where regulations and other protective measures are absent, merely voluntary, or insufficiently enforced. The WHO recently consulted regulatory agencies around the world and published a new document outlining key principles that governments and regulators can follow when developing new AI guidance or adapting existing guidance.
International governance can prevent a “race to the bottom” among companies seeking first-mover advantage at the expense of safety and efficacy standards, as well as among governments seeking advantage in the geopolitical competition for technological supremacy. It can thus ensure that all companies meet minimum safety and efficacy standards and avoid regulations that confer competitive advantages or disadvantages on particular companies or governments. International governance can also hold governments accountable for their investment in and participation in the development and deployment of AI-based systems, and ensure that governments adopt appropriate regulations that respect ethical principles, human rights, and international law. The absence of globally enforceable standards may also hamper the adoption of such products.
International governance can take various forms. One proposal is to establish a publicly funded research institution supported by multiple governments, similar to the European Organization for Nuclear Research (CERN), that would pool funds and human resources to carry out large transformative projects and share the results publicly. Another proposal is to establish an entity responsible for developing the most advanced and highest-risk AI within highly secure facilities, while making other attempts to build such AI illegal. At present, such large-scale projects are not publicly funded, public-interest undertakings but the preserve of large, commercially competing technology companies. Leaders around the world, including heads of state and technology executives, have called for treating AI in a manner similar to nuclear weapons and for establishing a global regulatory framework akin to the treaties governing their use.
Whatever form international governance takes, it is important that it not be determined solely by high-income countries, and especially not by high-income countries working primarily or exclusively with the world’s largest technology companies. Standards developed and imposed by high-income countries and technology companies, whether for AI in general or specifically for the use of multi-modal large models in healthcare and pharmaceuticals, would leave most people in low- and middle-income countries without any role or voice in standard-setting. This could make future AI technologies perilous or ineffective in the very countries that might ultimately benefit most from them.
As proposed by the UN Secretary-General in 2019, international governance of AI may require all stakeholders to cooperate through networked multilateralism, enabling closer, more effective, and more inclusive collaboration among the UN family, international financial institutions, regional organizations, trade groups, and others, including civil society, cities, businesses, local authorities, and young people. Placing ethics and human rights at the core of the development and deployment of multi-modal large models can make a significant contribution to achieving universal health coverage.
Recommendations:
• Countries should support collective efforts to establish international rules for AI governance. Whatever form such governance takes, it should not be determined solely by high-income countries, or chiefly by high-income countries working primarily or exclusively with the world’s largest technology companies, as this would leave most people in low- and middle-income countries without a role or voice in shaping the international governance of AI.
Acknowledgements
The development of this WHO guidance was led by Andreas Reis (co-lead, Health Ethics and Governance Unit, Research for Health Department) and Sameer Pujari (Department of Digital Health and Innovation), under the overall guidance of John Reeder (Director, Research for Health Department), Alain Labrique (Director, Department of Digital Health and Innovation), and Jeremy Farrar (Chief Scientist).
Rohit Malpani (advisor, France) was the principal writer. The co-chairs of the WHO expert group on the ethics and governance of AI for health, Effy Vayena (ETH Zurich, Switzerland) and Partha Majumder (Indian Statistical Institute and National Institute of Biomedical Genomics, India), provided overall guidance for the drafting of the report and led the work of the expert group.
The WHO thanks the following individuals for their contributions to the development of this guideline.
Members of the WHO Expert Group on AI Ethics and Governance for Health
Najeeb Al Shorbaji, Electronic Health Development Association, Amman, Jordan; Maria Paz Canales, Global Partners Digital, Santiago, Chile; Arisa Ema, University of Tokyo, Japan; Amel Ghouila, Bill and Melinda Gates Foundation, Seattle, USA; Jennifer Gibson, WHO Collaborating Centre for Bioethics, University of Toronto, Canada; Kenneth Goodman, Institute for Bioethics and Health Policy, University of Miami, USA; Malavika Jayaram, Digital Asia Hub, Singapore; Daudi Jjingo, Makerere University, Kampala, Uganda; Tze Yun Leong, National University of Singapore; Alex John London, Carnegie Mellon University, Pittsburgh, USA; Partha Majumder, Indian Statistical Institute and National Institute of Biomedical Genomics, Kolkata, India; Tshilidzi Marwala, University of Johannesburg, South Africa; Roli Mathur, Indian Council of Medical Research, Bangalore, India; Timo Minssen, University of Copenhagen, Denmark; Andrew Morris, Health Data Research UK, London, UK; Daniela Paolotti, ISI Foundation, Turin, Italy; Jerome Singh, University of KwaZulu-Natal, Durban, South Africa; Jeroen van den Hoven, Delft University of Technology, Netherlands; Effy Vayena, ETH Zurich, Switzerland; Robyn Whittaker, University of Auckland, New Zealand; Zeng Yi, Chinese Academy of Sciences, Beijing, China.
Observers
David Gruson, Luminess, Paris, France; Lee Hibbard, Council of Europe, Strasbourg, France.
External Reviewers
Oren Asman, Tel Aviv University, Israel; I. Glenn Cohen, Harvard Law School, Boston, USA; Alexandrine Pirlot de Corbion, Privacy International, London, UK; Rodrigo Lins, Federal University of Pernambuco, Recife, Brazil; Doug McNair, Deputy Director of Integrated Development at the Bill and Melinda Gates Foundation, Seattle, USA; Keymanthri Moodley, Stellenbosch University, Cape Town, South Africa; Amir Tal, Tel Aviv University, Israel; Tom West, Privacy International, London, UK.
External Contributors
Two boxes in the guidance were drafted by external contributors: Box 2 (Ethical Considerations for Children Using Multi-Modal Large Models) was drafted by Vijaytha Muralidharan, Alyssa Burgart, Roxana Daneshjou, and Sherri Rose of Stanford University, USA; Box 3 (Ethical Considerations Related to Multi-Modal Large Models and Their Impact on Persons with Disabilities) was drafted by Yonah Welker, an independent consultant based in Geneva, Switzerland.
All external reviewers, experts, and authors declared their interests in accordance with WHO policy. Upon assessment, none of the declared interests was found to be significant.
World Health Organization
Shada Al-Salamah, Technical Officer, Digital Health and Innovation, Geneva; Mariam Otmani Del Barrio, Scientist, Special Programme for Research and Training in Tropical Diseases, Geneva; Marcelo D’Agostino, Head of Information Systems and Digital Health at the WHO Regional Office for the Americas, Washington; Jeremy Farrar, Chief Scientist, Geneva; Clayton Hamilton, Technical Officer, WHO Regional Office for Europe, Copenhagen; Kanika Kalra, Consultant, Digital Health and Innovation, Geneva; Ahmed Mohamed Amin Mandil, Coordinator of Research and Innovation at the WHO Regional Office for the Eastern Mediterranean, Cairo; Issa T. Matta, Legal Affairs Office, Geneva; Jose Eduardo Diaz Mendoza, Consultant, Digital Health and Innovation, Geneva; Mohammed Hassan Nour, Technical Officer, Digital Health and Innovation at the WHO Regional Office for the Eastern Mediterranean, Cairo; Denise Schalet, Technical Officer, Digital Health and Innovation, Geneva; Yu Zhao, Technical Officer, Digital Health and Innovation, Geneva.
[End of series]
Editor: Shang Dan
Reviewer: Ji Pengcheng