LI Xueyao
Planning Committee Member of Shanghai Jiao Tong University Institute of Chinese Law and Society
1. Drawing lessons from the biomedical ethics system
Article 7 of the "Provisions on the Administration of Algorithm Recommendation in Internet Information Services", promulgated by the Cyberspace Administration of China in 2022, marked the beginning of the legislative construction of an AI ethics review system in China. In July 2023, seven departments including the Cyberspace Administration of China promulgated the "Interim Measures for the Administration of Generative AI Services" (hereinafter the "Interim Measures"), which invokes the relevant principles of AI ethics as substantive legal rules. Subsequently, the Ministry of Science and Technology and other departments promulgated the "Measures for the Ethical Review of Science and Technology (Trial)" (hereinafter the "Ethical Review Measures"), a normative document whose appendix, the "List of Science and Technology Activities Requiring Ethical Review", enumerates several situations in which AI ethics review must be carried out.
Today, for regulatory departments at all levels and for compliance entities alike, how to transform AI ethics from a framework of moral principles into operational, predictable, and calculable ethical compliance practice is an urgent practical problem. Recently, the relevant central departments have been soliciting opinions from all sectors, through various channels, on formulating dedicated AI ethics review regulations. How, then, should China build an AI ethics review system that is "operable, calculable, and predictable"? Given that biomedical ethics and its review practices are relatively mature, they can serve as an important reference. Broadly, the discussion can proceed from four questions: first, what similarities the two fields share; second, how they differ and what is distinctive about AI ethics; third, what lessons should be drawn from the institutional practice of biomedical ethics and its theoretical debates; and fourth, which parts of the institutional tradition of biomedical ethics can be directly inherited by AI ethics practice.
2. The continuity between artificial intelligence ethics and biomedical ethics
The applied ethics community has long debated whether the development of AI technology raises new and unique ethical issues, or whether AI ethics merely repeats the ethical dilemmas already encountered in more mature fields such as biomedicine. Most scholars with a background in biomedical ethics argue that there is a high degree of consistency or continuity between the two, and that they should be integrated into a single theoretical system of applied ethics.
If the four principles of biomedical ethics (autonomy, beneficence, non-maleficence, and justice) are compared with the substantive principles of AI ethics listed in the legislation and policy documents of various countries and in the academic literature, a strong degree of symmetry and consistency between the basic principles of the two fields can indeed be found. (See Table 1) In other words, within the legal framework, analogy and similar methods can be used to derive the relevant principles and rules of an AI ethics review system from the substantive rules of biomedical ethics review.
3. Differences between AI ethics and biomedical ethics
The discussion above cautions against over-emphasizing the uniqueness of AI ethics. In ethical review practice, however, we must also guard against the opposite, conservative tendency: treating biomedical ethics and AI ethics as identical and simply copying biomedical ethics review practice into AI ethics compliance. Using the conceptual tool of "the nature of things", the author has preliminarily identified three constraints that, owing to its technical characteristics, distinguish AI ethics from biomedical ethics: the technical embeddability of moral rules, a stronger scenario dependence, and a procedural dependence on technical processes. The discussion below continues along this theoretical line:
First, the compliance subjects differ: in biomedical ethics the responsible subjects form a single, well-defined class, whereas in AI they are "generalized". Beyond developers and providers, users of AI become a focus of ethical challenges, and the scope of users can be understood as encompassing every natural and non-natural person in the world. On the one hand, the use of AI by different subjects can exponentially improve the operating efficiency of society as a whole; on the other hand, the ethical disputes it triggers are complicated, and it would be unthinkable to exclude use from the scope of ethical review. In other words, unlike biomedicine, AI is applicable to almost any human activity. This opens countless possibilities for its use and means that AI ethics cannot be reduced, as biomedical ethics largely can, to the professional ethics of researchers or physicians. AI will challenge the ethical and moral rules that humans have formed in every field, and every user is a potential ethical challenger.
Of course, this leads to two further points. First, ethical conflicts at the use stage of AI may ultimately need to be resolved through legislative procedures or judicial review; but given the complexity and uncertainty of specific application scenarios and the need for efficiency gains, various forms of legal authorization can delegate the obligation of ethical review to platform-type and public-management-type users, and most of the obligations to resolve ethical dilemmas can be assigned to ethics review committees lawfully established by self-regulatory bodies. Second, it is necessary to distinguish among the types of ethical dilemma that technological innovation creates. The ethical debate in the film "Dying to Survive" was triggered by whether and how generic drugs could be used, but the root of the dilemma was a value conflict between maintaining administrative order and saving lives, not a question of how the technology affects individual or human autonomy.
Second, the emphases of the principles and rules in compliance content differ. Take the principle of autonomy as an example. There is no doubt that the autonomy of human collectives and of individuals is central to AI ethics principles. The first sense is autonomy as individual freedom, including the protection of privacy and property rights; the second is the autonomy of humanity as a whole, which requires that highly autonomous AI systems be designed so that their goals and behaviors remain consistent with human values throughout their operation. Although traditional biomedical research and its applications, such as lethal biological weapons, can also pose the danger of the entire human race being destroyed or controlled by non-human agents, the possibility that AI systems might take over and threaten the autonomy of humanity as a whole has never been the focus of traditional biomedical ethics.
Take the principle of justice as another example. Because AI systems deeply intervene in, and gradually replace, human decision-making, they differ essentially from biomedical technology, which is only an object of decision-making and of benefit distribution. Biomedical R&D therefore involves at most distributive justice, which is generally considered at the redistribution stage rather than at the R&D stage. The operation of AI systems, by contrast, can itself dissolve or distort the existing system of legal procedure. Compared with everything applied ethics has dealt with in the past, the application of AI and its transformative impact on society are broader and more far-reaching. Every instance of research, development, deployment, and use of cutting-edge AI technology therefore needs to be examined from the standpoint of human justice, that is, whether humanity's existing social order can be sustained.
Third, the compliance objects differ as between the "non-moral" and the "moral". The biggest difference between the review objects of AI ethics and those of biomedical ethics lies in the requirement of "morality" versus "non-morality". Recently, the substantive content of international AI principles has increasingly been condensed and elaborated as the "principle of trust", and "trust" can in fact be paraphrased as an expression of "morality".
In the biomedical field, non-AI biomedical technologies and products have distinctly "non-moral" characteristics, so the principle of technological neutrality can be applied relatively smoothly in legal compliance. AI differs from traditional biomedical R&D and application in that its products and services are not merely hardware: they are usually programs and applications embedded in extremely complex systems, such as systems generating health and welfare data or tax systems. It is therefore plainly difficult to apply the principle of technological neutrality to AI technologies and products, especially generative AI systems.
4. Key points for establishing an AI ethics review system
The above analysis suggests that the legislative approach to China's AI ethics review system can be developed along the following lines.
First, the legislative model. As the foregoing analysis shows, AI ethics legislation must be distinguished from biomedical ethics legislation. The AI ethics review system should be legislated separately within the framework of science and technology ethics governance, with normative requirements that differ from biomedical ethics regarding the subjects responsible for ethics review, the conditions that trigger review, the composition of expert panels, the legal effect of review conclusions, and the rules of procedure.
Specifically, alongside the "Ethical Review Measures", which are oriented mainly toward biomedical ethics review scenarios, a parallel administrative regulation or normative document dedicated to AI ethics should be drafted within the current ethics review system. One current legislative idea treats the "Ethical Review Measures" as the superior law for the AI ethics review system. But because the existing "science and technology ethics" regime and its associated review institutions (such as committee composition) inherit the "biomedical ethics review" system, it is necessary, in order to guard against the unexamined inertia of the latter's review mindset, to draft as soon as possible in the short term a normative document or administrative regulation that is not constrained by the "Ethical Review Measures". Considering that the practicability of AI ethics principles bears on the competitive advantage of China's AI technology and industry, and involves a large number of procedural and authorization norms, a stand-alone "AI Ethics Law" should be drafted in the short to medium term and, when the time is ripe, incorporated as a special chapter of a general "AI Law".
Second, the content of specific clauses. Given the embedded, scenario-based, and procedural character of AI ethics, rule generation must be bottom-up and evolutionary. In concrete scenarios of ethical judgment, it is therefore extremely important for engineers to communicate effectively with ethics and legal experts, to argue with them, to reach consensus, and finally to translate that consensus into action. Accordingly, in designing ethics review clauses, drafting resources should not be concentrated on the content of substantive ethical principles; the focus should instead be procedural, centering on the design of the following clauses:
First, the establishment of the ethics review committee, its responsibilities, and the allocation of accountability. On the one hand, unlike biomedical technology, the main force of AI R&D has gradually shifted to enterprises rather than universities and research institutes. The authorities overseeing ethics review committees should accordingly shift from the departments supervising scientific research institutions and universities to the industry and information technology departments. On the other hand, from the perspective of a reasonable distribution of expert power and administrative power, more authorization clauses should be designed. It is also necessary to clarify the gatekeeper responsibilities of platform-based development and application companies in AI ethics governance, and to confirm, by authorization, the relevant rules they have created.
Second, the composition of the ethics review committee. To better institutionalize adversarial deliberation, the rules should address the selection, composition, qualifications, and dynamic adjustment of members. In particular, considering the review goal of "ethical AI" and its technical feasibility, the proportion of AI technology experts on the committee should be increased. AI ethics review still rests on "public moral intuition", but that intuition can be implemented directly through technical means, and this should be reflected in the qualifications and composition of the committee.
Third, the rules of procedure of the ethics review committee. Attention should be paid to the design of the review and decision-making processes. Related to the previous point, given the technical requirement that AI ethics be embeddable, the review process can include a deliberation procedure suited to the technology, to prevent ethical review from running idle or degenerating into a tool by which "public moral intuition limits the progress of AI technology".
Fourth, the conditions for initiating the AI ethics review process. The focus of AI ethics review should not rest solely on the R&D stage, nor should the responsible parties be limited to developers and providers. Rather, the responsibility for ethical review should be allocated more to users, especially industry users and platform users of decision-making AI systems. This shift in focus follows from the characteristics of AI technology, and it also serves to optimize the allocation of responsibility so as to better promote the development of AI technology and its industry.