Systematic Construction of the Ethics Review System for Artificial Intelligence Technology
Author: Zhao Jingwu
Associate Professor, School of Law, Beihang University
Deputy Director, Research Base, Beijing Science and Technology Innovation Center
Doctor of Law
Abstract: The practice of science and technology ethics review in China currently suffers from formalism and hollowness, and the inherent "soft governance" character of science and technology ethics governance makes it difficult for the system to achieve its intended institutional goals. In practice, the risk of ethical loss of control manifests chiefly in the inability of the existing review system to effectively contain the science and technology ethics risks of artificial intelligence. The root cause is that the functional positioning of the AI ethics review system has not been clarified, leaving the scope of review, the review standards, and the review procedures ambiguous. Although the review system belongs to a typical risk-governance paradigm, it rests on non-mandatory social ethical norms; its results are judgments at the level of ethics and morality, and its nature is a reasonableness review of scientific and technological innovation activities. On this basis, the distinct roles of the science and technology ethics committee and the science and technology ethics review committee should be appropriately distinguished: only where sensitive research fields are involved should the reviewed entity be required to convene a science and technology ethics review committee, with the review results and security risk assessment results serving as the risk basis for specific governance measures.
Keywords: science and technology ethics review; science and technology ethics risk; reasonableness review
1 Problem Statement: Risk of Loss of Control over the Ethics of Artificial Intelligence
After the rollout of the "Luobo Kuaipao" robotaxi service in Wuhan, AI governance is no longer only a question of ensuring that technology applications are safe and reliable; it also raises science and technology ethics controversies, such as how an autonomous driving system should weigh competing interests in an emergency. Abroad, a 14-year-old boy died by suicide after becoming excessively attached to an AI virtual companion, an incident foreign media dubbed the "first AI chatbot death." Ethics risks of this type threaten the status of humans as subjects, personal dignity, and the existing system of moral norms. In particular, the application of AI technology in fields such as the labor market, social credit assessment, and insurance will inevitably strain basic moral concepts such as social justice.
Science and technology ethics risk has long been a concern in biomedicine, where the basic framework of the ethics review system took shape. The leapfrog development of artificial intelligence, however, inevitably strains that framework. On the one hand, the object of review has changed: ethics review in the traditional biomedical field addresses the more urgent protection of human life and health, whereas ethics review in the AI field is concerned more with the reasonableness of scientific and technological innovation and of technology application. On the other hand, the existing review model cannot simply be carried over: biomedical ethics review is built on mandatory review, whereas AI ethics review obviously cannot subject every matter touching ethical risk to mandatory review; doing so would erect an enormous institutional obstacle to innovation. Objectively speaking, the ethics risks of innovative AI applications are risks of "loss of control" in the innovation process, so necessary restraints must be imposed through ethics review, thereby reconciling the two distinct governance goals of promoting scientific and technological innovation and controlling science and technology ethics risk.
In fact, the science and technology ethics review system is an imported institution whose original purpose was to put an end to unethical and unjustifiable research in the life sciences and medicine. The Nuremberg Code and the Declaration of Helsinki, for example, were directed at ending abusive experiments on human subjects, and the United States National Institutes of Health (NIH) issued research guidelines to ensure the humane treatment of laboratory animals. The establishment of China's ethics review system can be traced to the "Good Clinical Practice for Drugs," which likewise aimed to safeguard the rights and safety of trial subjects; subsequent normative documents such as the "Ethical Review Measures for Biomedical Research Involving Humans" and the "Regulations on the Management of Human Genetic Resources" also centered on life-science and medical experiments. When a review system grounded in ethical demands such as life safety and personal dignity is extended to other technical fields such as artificial intelligence, its functional positioning must be reinterpreted so that ethics review follows a common governance logic across the life sciences, medicine, artificial intelligence, and other fields.
In practice, the ethics review system has not delivered the ideal of "collaborative governance by law and ethics" that its supporters envisage; instead, review has become formalistic and proceduralized. A more intractable problem is that the operation of science and technology ethics committees faces practical obstacles: how a committee operates day to day, how review standards are clarified, how the review process is standardized, and how expert members are remunerated. Although many scholars have proposed improvements to the system, the research still lacks theoretical analysis and empirical investigation of the review mechanism. In sum, the core problem facing AI ethics review is that its institutional and implementation models have not been distinguished from the traditional system in biomedicine, making direct application to AI technology governance difficult. The "incompleteness" of the system manifests chiefly in the ambiguity and generality of three aspects: the scope of review, the review standards, and the mode of operation of the review system.
2 Functions and Effects of the Ethics Review of Artificial Intelligence Technology
Although Article 15, Item 5 of the "Measures for Science and Technology Ethics Review (Trial)" expressly lists "scientific and technological activities involving data and algorithms" as key review content, these Measures, as a "general law" of ethics review, fail to distinguish how review models differ across technical fields. Before clarifying how China's AI ethics review system should be constructed, it is therefore necessary to re-examine the core reasons why AI ethics review has failed to perform its function.
2.1 The current state of practice of China's science and technology ethics review system: hollowed-out review
The AI science and technology ethics review system does not operate well in practice, chiefly because ethics review has generally been hollowed out in implementation: some review bodies treat the review process perfunctorily, making the preset goals of ethics review difficult to achieve.
First, the operating mechanism and legal nature of the science and technology ethics review committee remain in doubt. Although Article 4 of the "Measures for Science and Technology Ethics Review (Trial)" requires research institutions to provide the necessary staff, office space, funding, and other conditions to secure the committee's independence, this guarantees only formal independence. Article 4 provides that entities engaged in life sciences, medicine, artificial intelligence, and similar scientific and technological activities should establish a science and technology ethics (review) committee where "sensitive areas of science and technology ethics" are involved, but the mechanism remains, in essence, self-restrained ethics review. Even with a committee composed of external experts and a recusal mechanism, it is doubtful how far the committee can achieve independent review and independent decision-making: the material support, including members' remuneration and venues, is still supplied by the very entity under review. This is the problem some researchers have long stressed, that an ethics review committee is prone to being "both referee and athlete." According to statistics compiled by some scholars, the ethics review committees of most scientific institutions have repeatedly "stamped their approval directly" during review, giving the system the character of "going through the motions."
Second, there is no consensus on how the rule of law can strengthen the "weak constraint" of ethics review. Articles 1 and 9 of the "Measures for Science and Technology Ethics Review (Trial)" make the assessment and prevention of ethics risks the legislative goals of review, but compared with models in the United States and Europe, China's institutional model is in fact a "weak ethics review system," which produces problems such as excessively high approval rates. Tracing the system's development, as some scholars have summarized, it has been controversial from the outset: supporters argue that scientific research cannot be equated with unfettered research freedom and should be subject to a degree of external constraint; opponents argue that review will inevitably raise the economic cost of research and innovation, and that ethics review committees are more likely to become "bureaucrat-dominated formalism." The basic framework of ethical regulation consists mainly of industry norms, a small number of abstract principles, and institutional self-regulation, which in turn gives ethical regulation its typically "weak constraint" effect. Whether through industry norms lacking enforcement power, vague ethical principles, or institutional self-discipline, the governance goal of preventing and controlling science and technology ethics risks is difficult to achieve.
Third, whether formalistic ethics review can be remedied through legal liability deserves discussion. Articles 47, 48, and 49 of the "Measures for Science and Technology Ethics Review (Trial)" provide for civil, administrative, and criminal liability, but each carries strict conditions of application. As to civil liability, Articles 47 and 48 provide that "causing property loss or other damage" gives rise to civil liability, which is a typical tort: even absent any ethics review, a right holder who suffers property loss or other damage may claim liability under the Civil Code of the People's Republic of China. As to administrative liability, the circumstances listed in Article 48, such as fraud and favoritism, are substantive violations of requirements and cannot be applied directly to merely formalistic review activities. Yet compared with no review or sham review, formalistic review is more likely to erode the actual effect of the review mechanism. Since the Measures are trial measures and in essence general provisions, the problem of formalized ethics review may have to await field-specific review rules for the life sciences, medicine, artificial intelligence, and other areas.
2.2 The particularity of AI technology ethics review
The most direct impact of AI technology innovation on the ethics review system is the distortion of the scope, standards, and methods of review. A scope that is too broad or too narrow turns ethics review into an "institutional burden" on AI innovation. In particular, because science and technology ethics, as a form of social ethical norm, lacks mandatory force, the review system is all the more easily reduced to the awkward position of "going through the motions."
As to the scope of review, there is a common misunderstanding that conflates legal risks with science and technology ethics risks. Some scholars count among the ethical problems of AI rising unemployment, the decision-making risks of autonomous driving systems, and the challenge that AI subject-consciousness poses to humanity, citing ambiguous governance responsibility and unfair governance outcomes. Others fold the ethical risks of emerging technologies together with legal risks and technical security, summarizing them as technical security problems, infringement of personal rights and interests, social equity problems, and threats to the ecosystem. These views are not unreasonable, but if problems such as infringement of personal information rights or unclear legal responsibility can be fully resolved by applying legal norms, why establish a redundant ethics review system to evaluate them again? This tendency to ethicize legal issues also causes the ethics review system itself to be regarded as an ethical normative mechanism in a purely moral sense. It must therefore be clarified that although science and technology ethics inevitably involves the protection of personal rights and interests, "personal rights and interests" here denote a more abstract legal interest; at the intersection of ethics and law, it is doubtful precisely which legal interests the application of AI technology infringes.
Take the practice of using AI to digitally "resurrect" the deceased as an example. From a legal perspective, absent the consent of the next of kin, such an application may infringe specific rights such as personal information and portrait rights; from an ethical perspective, absent that consent, it injures the next of kin's emotional interests, such as remembrance of the deceased and the wish not to be disturbed. Both biomedical ethics and AI ethics emphasize preserving the status of humans as subjects: scientific and technological innovation must not treat humans as objects of technical discipline, and the purpose of science and technology ethics is to prevent the excessive pursuit of research freedom from overriding human dignity. Article 990 of the Civil Code of the People's Republic of China is a referral clause to the command to "respect and protect human rights" in Article 33, Paragraph 3 of the Constitution of the People's Republic of China; the "personal dignity" it stipulates has typical ethical significance and can serve as an important factor in the relative separation of science and technology ethics from legal norms. Guided by ethics review, then, the basic category of AI science and technology ethics can be defined as the ethical norms of scientific and technological innovation that safeguard the status of humans as subjects and personal dignity.
As to review standards, science and technology ethics is not a concept with a clear connotation and extension, and it is difficult to unify the standards followed by different reviewing bodies. Judging from the divergence of ethics standards at home and abroad, standardization appears to be an unsolvable problem: science and technology ethics is an indeterminate legal concept, shaped by social practice, technological development, and national and ethnic culture; it acquires new content as the times change and, it is said, cannot be standardized at all. Even if the legislature fixed unified standards and thereby settled the question of who is qualified to set them, those standards could hardly escape the subjective opinions of the experts involved in the legislative process. But this "unsolvable problem" need not simply be shelved, for it rests on a logical paradox: since science and technology ethics belongs to the category of human ethics, and human ethics has never been standardized, how could science and technology ethics have objective standards? And if the "unsolvable problem" is thus a false proposition, how should the ethical judgment standards on which ethics review relies be determined? Returning to the goals of ethics governance: whether through ethics review or other governance mechanisms, the core purpose is to address the problem of science and technology ethics risk.
The inherent logic of such risk governance is not to prevent and control risk events through ethical judgment standards in a strict sense, but to ask whether potential ethics risks can be identified and the necessary preventive measures taken. In other words, even if reviewing bodies apply different ethical judgment standards, so long as those standards fall within the basic category of science and technology ethics, there is no substantial difference in the final governance effect.
As to the review mechanism, claims such as "building institutional norms for science and technology ethics to achieve the corresponding governance goals" confuse the mechanisms of legal governance and ethical governance. Legal governance acts on research and innovation through mandatory provisions, whereas ethics review acts on the ethical outlook of researchers and is internalized as self-discipline in innovation activities. Their complementarity in the governance system follows a logic of "subject and conduct," not a double constraint of "conduct and conduct." On this mechanism of action, the functional positioning of ethics review shifts accordingly. Specifically, ethics review addresses scientific and technological innovation, research and development, and application, ensuring that the design, production, and sale of related technical services and products conform to science and technology ethics; ethics training is woven into the education and work experience of scientific and technological workers, fostering an ethical outlook that respects personal dignity; and ethics consultation supplements the indeterminacy of science and technology ethics, promoting, through public online comment, expert technical explanation, and similar consultative forms, ethics concepts that command social consensus. In institutional construction, therefore, the mechanism of ethics review should not be understood as the application of science and technology ethics norms in a broad sense.

