The Legal Nature of Artificial Intelligence Ethics



Li Xueyao

Professor at the KoGuan Law School, Shanghai Jiao Tong University

Planning Committee Member of the China Institute for Socio-Legal Studies, Shanghai Jiao Tong University*


Abstract: Transforming artificial intelligence ethics from moral principles into operational, predictable, and computable ethical compliance practices requires an exploration of the legal nature of artificial intelligence ethics, especially of how the legal system evaluates and incorporates artificial intelligence ethical norms. This can be analyzed with two tools drawn from the socio-legal theories once used to reflect on norm theory: "juridification" and "the essence of things." First, compared with the generalized concepts of "rule of law" and "standardization," "juridification" converges better with existing theory when used to analyze the practical issues of artificial intelligence ethics. The existing paths for the juridification of technology ethics are mainly three: the justification of new rights often used by deontologists (such as personality rights), the soft law often used by consequentialists (such as innovation in government regulatory tools), and the community ethics often used by virtue theorists (such as professional ethics). Second, legislators or "legal discoverers," in trying to derive normative ethical requirements from the characteristics of artificial intelligence technology and its industry, can draw on the concept of the essence of things and, by comparison with biomedical ethics, abstract three binding elements of the juridification of artificial intelligence ethics: the technical embeddability of moral rules, stronger scenario specificity, and procedural dependence on the technical process.

Keywords: Artificial Intelligence Ethics; Technology Ethics; Ethical Review; Juridification; Essence of Things


Ethics is considered an important means of artificial intelligence governance and is one of the focal topics of current legislative discussion. The "Measures for Science and Technology Ethics Review (for Trial Implementation)" (hereinafter the "Ethics Review Measures"), issued by the Ministry of Science and Technology and other departments in 2023, has built, in a quasi-legislative sense, a special system for the ethical review of artificial intelligence. Since the promulgation of the "Ethics Review Measures," the artificial intelligence industry and its compliance service community have voiced many objections, chiefly that the "Ethics Review Measures" were drafted in the mold of biomedical ethics and do not fully consider the characteristics of artificial intelligence ethics, which will cause practical operational trouble for developers and related enterprises in specific compliance procedures. To provide systematic theoretical references for the construction and practical operation of artificial intelligence ethical systems, we can combine the concept of "juridification" from the socio-legal tradition with the concept of the "essence of things" from the tradition of legal methodology and, by analyzing the differences between artificial intelligence ethics on the one hand and biomedical ethics and professional ethics on the other, meet the practical needs of artificial intelligence legislation and present the legal nature of artificial intelligence ethics, especially how the legal system evaluates and incorporates artificial intelligence ethical norms.

1 Why Adopt the Thinking Path of Juridification?

1.1 The Demand for Transforming a Moral-Principle Framework into Ethical Compliance Practice

Within applied ethics, compared with bioethics, business ethics, and professional ethics, the study of artificial intelligence ethics seems new, but its theory can be traced back to the 1960s, slightly earlier than the emergence of bioethics; the latter, as a discipline, is generally considered to have emerged in the 1970s. In 1960, the journal Science launched a discussion on the ethical issues brought by automation. Since then, the large-scale social application of cutting-edge information technologies such as the Internet, blockchain, big data, and the metaverse has brought corresponding ethical discussions, together with kindred theoretical concepts and institutional practices such as "information ethics," "information technology ethics," and "big data ethics."

With the increasing complexity of artificial intelligence technology and its application scenarios, and especially its growing ability to perform more complex human tasks, its behavior has become more difficult to monitor, verify, predict, and explain. Governments, international organizations, professional bodies, and platform enterprises have therefore proposed a variety of initiatives, principles, guidelines, and codes on the importance and content of the rules of artificial intelligence ethics over the past decade. The debate has gradually expanded from the most common issues of data security governance (including data property rights protection, privacy protection, informed consent, accuracy, and deepfakes) to social fairness (including distributive justice, unemployment, gender discrimination, and racism), and even to the legal status of artificial intelligence entities. After generative artificial intelligence came into the public eye in 2022, all sectors of society have paid still more attention to the governance function of artificial intelligence ethics.

Globally, the practice of artificial intelligence ethics governance shares a common problem: it focuses on value declarations at the level of principle but lacks enforcement. The European Union may be the earliest jurisdiction to issue specialized artificial intelligence ethical rules of "juridification" significance. As early as April 8, 2019, it issued the "Ethics Guidelines for Trustworthy AI" through the European Commission, proposing a definition of "trustworthy AI" and related ethical requirements; yet this also drew the criticism in compliance practice of being "difficult to operationalize." In 2019, China released the "Beijing Consensus on Artificial Intelligence," but this differs from the EU guidelines: the latter were issued by official institutions, while the former came mainly from university academies and industry alliances. Even the "Artificial Intelligence Act" issued by the European Union in December 2023 proposes mandatory ethical rules only in principle, for high-risk artificial intelligence systems, and these are essentially no different from the "demoralized" legal rules of the past; they are better described as compliance requirements for artificial intelligence than as "artificial intelligence ethics." More importantly, although the text of the act is very long and the legislative background and legal principles are fully elaborated, the specific implementation of its provisions is not clear. For example, Article 16 "proposes a ban on manipulative and exploitative practices," but how to "ban manipulative and exploitative practices" still awaits specific implementing rules. Article 81 of the act likewise uses the method of "encouragement," authorizing providers to formulate their own rules of conduct so that non-high-risk artificial intelligence systems meet the standard of being "ethical and trustworthy."

The laws, regulations, and administrative rules on data and algorithm regulation issued in China in recent years touch on the principles and standards of artificial intelligence ethics, but only indirectly. For example, Articles 8 and 28 of the "Data Security Law" merely mention the requirement to "comply with social morality and ethics," yet neither the provisions nor the legislative explanations clarify what social morality is, what ethics is, or why morality and ethics need to be distinguished. The two "Artificial Intelligence Law of the People's Republic of China (Draft for Suggestions from Scholars)" recently published in China both make special provision for the principles of artificial intelligence ethics and their review systems, but their content remains abstract. The "Basic Requirements for the Safety of Generative Artificial Intelligence Services" and similar technical standards issued by the National Cybersecurity Standardization Technical Committee in March 2024 can be understood as a form of operationalization of artificial intelligence ethics compliance, but for most compliance entities the standardization of ethical requirements remains indirect and unclear.

So how can abstract, principle-based artificial intelligence ethical norms be transformed into operable ethical compliance practices? A preliminary summary suggests roughly three steps. The first is to determine "what artificial intelligence ethics is"; the main mode of operation is to summarize the substantive content of artificial intelligence ethics into a series of principles, for example the concept of "trustworthy, safe, and responsible AI." The second is "how to implement artificial intelligence ethics," that is, to refine the principles into relevant rules and standards: for example, operationalizing the principle of trustworthy artificial intelligence into pass rates for related product tests, or decomposing the principle of responsible artificial intelligence into rules for sharing risk responsibility. At this stage, beyond the substantive rules above, the allocation of expert committee authority and procedural design are also involved. The third is the "assessment of the implementation of artificial intelligence ethics," which applies the rules and standards to actual product research and application to verify whether a specific artificial intelligence system, service, or product complies with the corresponding ethical principles (a minimal sketch of the second and third steps follows below). The real technical difficulty lies in the second step, which is also the key point at which jurisprudence enters the discussion of artificial intelligence ethics. It should be noted that the importance of the juridification and "rule of law" approach should not be overemphasized, nor should the importance of technical methods at this stage be ignored. For example, value alignment testing is both a technical issue and a process involving the overall knowledge of the humanities and social sciences.
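To make the second and third steps concrete, the following is a minimal sketch, in ordinary program form, of how an abstract principle might be refined into named, checkable rules and then assessed mechanically. All rule names, metrics, and thresholds here are hypothetical illustrations, not requirements drawn from any actual regulation or standard.

```python
# Step two ("refining principles into rules and standards"): each abstract
# principle becomes a named rule with a checkable condition. Step three
# ("assessment") then reduces to evaluating measured product metrics
# against those rules. Every name and threshold below is hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EthicsRule:
    principle: str                 # the abstract principle being operationalized
    description: str               # the concrete, checkable requirement
    check: Callable[[dict], bool]  # returns True if the product's metrics comply

RULES = [
    EthicsRule("trustworthy AI", "robustness test pass rate >= 95%",
               lambda m: m.get("robustness_pass_rate", 0.0) >= 0.95),
    EthicsRule("responsible AI", "a risk-responsibility owner is designated",
               lambda m: bool(m.get("risk_owner"))),
    EthicsRule("fairness", "demographic parity gap <= 0.05",
               lambda m: m.get("parity_gap", 1.0) <= 0.05),
]

def assess(metrics: dict) -> list[str]:
    """Step three: report which operationalized rules a product fails."""
    return [r.description for r in RULES if not r.check(metrics)]

# Example: a product profile measured during ethical review.
product = {"robustness_pass_rate": 0.97, "risk_owner": "compliance team",
           "parity_gap": 0.08}
print(assess(product))  # -> ['demographic parity gap <= 0.05']
```

The sketch also shows why the second step carries the real difficulty: the lambdas are trivial, but deciding which metric and which threshold legitimately stand in for "trustworthy" or "fair" is precisely the normative work the paper discusses.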

1.2 Basic Considerations of the Thinking Path of Juridification

Juridification (Verrechtlichung) is a multifaceted concept with critical connotations, at once descriptive and normative. It is generally believed to encompass "constitutive juridification," the expansion and differentiation of law, the increasing resolution of conflicts through formal legal means, the growth of judicial power, the expansion of legal culture, and so on.

From Max Weber to Niklas Luhmann, from European social theory to the American law and society movement, and on to the Japanese discussions of "legal modernization and over-juridification" at the turn of the century, socio-legal studies and jurisprudence have discussed juridification extensively. For instance, Naomasa Tanaka summarized juridification under three aspects: the strengthening of legal demands; the growing complexity of legal norms and institutions; and the internalization of legal values, principles, norms, and procedures in people's consciousness and behavior. Building on this classification, Tanaka, along with contemporaries such as Takao Kuwase and Yoshihiro Muguruma, drew on the theoretical foundations of American critical legal studies to identify the phenomena of alienation that emerged in the practice of the "rule of law" in postwar Japan. Through a typological approach that mixed normative pursuit with factual description, he encapsulated the responses to this alienation caused by over-juridification into two types: first, "de-juridification" or "counter-juridification," which includes the informalization of legal procedures, for example the rise of informal dispute resolution such as ADR; second, the expansion of the scope and types of law, leading to the emergence of regulatory law and autonomous law.

With the rise of cutting-edge technology, domestic scholars such as Zhu Mang have recently attempted to study new "institutional forms" such as technical standards, ethical norms, and internal school regulations through the concept of juridification. Continuing this line of research, an attempt is made here to introduce the concept of juridification into the study of artificial intelligence ethics. For ease of understanding, and to meet the needs of analysis in the Chinese context, juridification is simplified here as the process or phenomenon by which social norms, such as social morality, religious ethics, professional ethics, technical standards, and industry customs, are given legal form.

Juridification must be distinguished from two neighboring concepts. First, "legal sources." The main difference is that juridification is a processual description: it can describe not only the process by which "social norms" are transformed into "legal sources," but also changes in the forms of power (such as the expansion of judicial power) and changes in legal consciousness and culture; in addition, juridification carries connotations of rights realization and proceduralization. Second, the "rule of law" commonly used in Chinese academic and practical circles. In discussions connecting artificial intelligence ethics with legal practice, many scholars habitually use dynamic concepts such as "rule of law" and "standardization." The main reasons for using "juridification" rather than "rule of law" are as follows:

First, the domestic academic community has generalized the use of the concept of "rule of law" while narrowing its connotations. From the principles inherent in the "rule of law," such as the supremacy of law, a rights-based approach, due process, checks and balances on power, and governance by good laws, we can deduce that the technical operations of the "rule of law" encompass legal discovery, rights realization, interest balancing, procedural safeguards, and more. For various reasons, however, in China today the concept of "rule of law" has been used ever more broadly while its connotations have been narrowed down to "rule by law."

Second, the connotation of "rule of law" lacks a critical dimension. In the process of stipulating artificial intelligence ethics, the generalization of AI ethics, the confusion between morality and law, and the creation of an AI ethics review system alongside legal compliance have been criticized both at home and abroad. The relevant queries began with EU data legislation and continued through the AI legislative process. This critique of moral generalization can be traced back to the 1970s and 1980s, when biomedical ethics was in full swing; the discussions of "juridification" and "rights discourse" in various social theories and critical legal theories at that time also touched on this point. In brief, "rule of law" is a positive concept. As Japanese scholars discussed at the turn of the century, the connotation of juridification makes it easier for us to observe the side of the "alienation of the rule of law"; and as for institutional measures to cope with that alienation, it is better able to propose multidimensional and multidirectional theoretical concepts and remedies, such as "de-juridification" and "anti-juridification."

2 How AI Ethics Is Juridified: An Exploration through the Essence of Things

2.1 Symbiotic Reconstruction of Ethics, Technology and Law

A review of the various principles, codes, guidelines, frameworks, and checklists issued by governments, research institutes, and industry shows that most of the substantive content of AI ethics is already covered by existing legal principles or rules on networks, data, algorithms, and the protection of personal information. Continuing the discussion from the early days of the juridification of technology ethics, we can ask: why is it necessary to juxtapose ethics and law in the governance of technology when these moral or ethical principles are largely covered by existing legislative texts? One traditional criterion for distinguishing law from ethics is the coercive power of the state. The legal force of an AI ethical review system, however, is manifested in the fact that certain AI systems that lack an ethical review system, or that have not gone through a specific ethical review process, may not be developed or may not enter the market. In this context, through transmutation clauses, authorization clauses, and the like, the substantive rules of AI ethics, as moral content, acquire a national coercive force similar to that of a normative document such as a standard, even though they are not explicitly expressed in substantive law. How, then, are the similarities and differences between the two to be grasped in theory and in practical operation?

The general answer from regulators and scholars who advocate the uniqueness of an AI ethics regime is that, functionally, legal compliance is not enough to steer society in the right direction: because the binary code of legal regulation is lawful/unlawful, it cannot say which lawful behaviors constitute good or best action, or accord with good governance. In addition, the mature discussion of the coupled, interactively evolving relationship between law and applied ethics, such as professional ethics, business ethics, and technology ethics, offers another answer to the query. Following this line of thought, the juridification of the ethics of science and technology is in fact an important element of the "ethicalisation of science and technology governance": as science and technology become deeply embedded in social, economic, political, and cultural structures, and as their impact on human moral life and the distribution of benefits grows ever more far-reaching, the state has to intervene through legal means and embed ethics in its legal and regulatory system; law, ethics, and technology are thereby deeply embedded in and reconstruct one another. To enter the main discussion, we leave aside the necessity of the juridification of technological ethics and turn to "how to juridify" and "how to juridify better," so as to provide theoretical guidance for the compliance practice of AI ethics.

2.2 Existing Thoughts on the Juridification of Ethics in Science and Technology: Emerging Rights, Soft Law, and Community Ethics

There are three main approaches to the juridification of science and technology ethics:

First, the deontological response: the study of emerging rights such as personality rights. Some private-law scholars, starting from the protection of personality rights, advocate regulating science and technology ethics through legislation; representative domestic scholars include Shi Jiayou of Renmin University of China. Similar explorations of emerging rights are not limited to private-law scholars. For example, the study of neuro-rights, proposed in response to the ethical dilemmas arising from neurotechnology and brain-computer interface technology, comes mainly from a group of neurotechnologists and ethicists and has been promoted chiefly within constitutional law and public international law. Some ideas attempt to break through the problem of insufficient protection of personality rights from the perspective of the bundle of rights; this latter thinking, of course, is premised on a sense that personality rights in the private-law sense are insufficiently protected. In addition, some scholars propose what is essentially a "dynamization of rights" to cope with the ethical conflicts brought by the development of science and technology.

Second, the consequentialist response: soft law research. In domestic and international legal circles, public law scholars discuss the function of science and technology ethics and its legal nature within the framework of the soft law/hard law dichotomy; representative domestic scholars include Professor Shen Hao of Peking University. Scholars of public international law have elaborated similar systems. Zhao Peng of China University of Political Science and Law's research on the rule of law in science and technology ethics, Song Hualin of Nankai University Law School's work on the rule of law in artificial intelligence ethics, and Liu Changqiu of Shanghai University of Political Science and Law's thoughts on the juridification of bioethics can all be classified under this research path. Among them, Zhao Peng's important idea of juridification is to establish, through an analysis of the uncertainty in the relationship between science, technology, and society, the necessity and advantages of sectoral law intervening in the governance of science and technology ethics. In this research, the process of juridification is mainly described using the more positive concept of the "rule of law." In the English-speaking world, beyond the legal profession, there are ample research results in public administration, ethics, and economics. The soft law approach is clearly influenced by American regulatory or neo-administrative jurisprudence, carries a distinctly consequentialist or utilitarian theoretical tradition, and intersects with the study of AI governance in public administration, sociology, and other disciplines.

Third, the virtue-theory response: community ethics. The scholars who pursue this line of research come mainly from management, science, applied ethics, and other fields. For example, on the question of how to transform AI ethical principles into compliance practices, many applied ethicists as well as AI technologists pin their hopes on virtue ethics. The main features of this approach are: not defining the specific content of an AI ethical code of conduct, but focusing on the individual level of the subjects of technology development, especially the social background of technologists and engineers; and eliminating the drawbacks of the principle of "technological neutrality" by enhancing the sense of social responsibility of technologists and the companies in which they work, in pursuit of the goal of virtue. This is essentially the "(engineers') professional ethics" and "business ethics" of virtue ethics. Of course, there is also a Kantian, deontological approach to ethics for engineers: a fixed set of universal principles and rules is determined beforehand for the subjects of technological development to strictly abide by.

The above lines of research are often integrated in practice. For example, the regulatory approach of techno-ethics, focused primarily on biomedical ethics, is constructed as the professional ethics of the scientific community during the research and development phase, achieving risk management through preventive, self-regulatory mechanisms such as "honor mechanisms" and the "threat of expulsion from the professional community." At the stage of social application, however, the focus of techno-ethics shifts to the thorough identification and effective assessment of relevant stakeholders. In this way of thinking, biomedical ethics, as professional self-regulatory rules, is embedded as a prerequisite for administrative permits, a basis for finding conduct illegal or administratively disorderly, and a basis for accountability for illegal actions (including the allocation of tort liability and even criminal penalties). These measures are both preventive and corrective, continuously enhancing the ethics' legal force; as a result, it gradually evolves into a normative system that combines "soft law" and "hard law," transcending the boundary between public and private law and surpassing the dichotomy of national law and social norms.

2.3 From Professional Ethics to Technological Ethics: Biomedical Ethics as a Frame of Reference

Biomedical ethics, which can be traced back to the famous "Hippocratic Oath," calls upon physicians to help their patients (beneficence), to do no harm (non-maleficence), and to uphold confidentiality. Building upon the Hippocratic Oath, and with the advent of modern "rights-based" and welfare societies, biomedical ethics has also developed principles centered on the patient's own will (autonomy) and on the prohibition of discrimination based on the patient's individual characteristics or personal attributes (non-discrimination). By nature, medical ethics at this stage is professional ethics, and it is a form of professional ethics that formed earlier, and serves as a more exemplary model, than the ethics of teaching or of the legal profession.

With the continuous advancement of modern science, scientific research and technological exploration have evolved from purely speculative theoretical activities into purposeful, large-scale practical actions. In particular, the methods and pathways of scientific discovery may pose dangers to society, to the people involved in the research process, or to values that humanity cherishes (such as a "beautiful environment"), and may even trigger social risks. That is, to a large extent, scientific and technological activities are no longer value-neutral. Consequently, discussions of technological ethics, engineering ethics, and other branches of techno-ethics emerged in the modern West, eventually culminating in the concept of techno-ethics. As a form of responsibility ethics, techno-ethics was in its early days essentially a form of professional ethics, or "scientists' ethics." In the English-speaking world, many textbooks and general works on technological ethics and engineering ethics still directly define these ethics as professional ethics.

At this stage, biomedical ethics gradually evolved from physicians' ethics, focused on the conduct of medical practice among doctors, to a broader consideration of issues such as the purpose of science and the nature of life, known as "bioethics." With the advance of technologies such as cloning and gene editing, and unlike nuclear science and engineering construction, where safety goals can be achieved through strong national regulatory mechanisms, a global fear arose of becoming "guinea pigs" (experimental subjects) at any moment. Over the past three to four decades, bioethical issues have accordingly become the focus of academic and social attention.

Owing to the changing nature of scientific and technological activities, the ethics of science and technology at this stage gradually showed a trend of decoupling from professional ethics. The driving force behind the development of science and technology now stems from practical needs: what technology to develop, in what fields to apply it, and how to apply it are no longer determined by scientists or technology practitioners, but by those who hold power over science and technology, including public power and capital. The ethics of responsibility therefore keeps expanding to personnel beyond scientists and technology practitioners, such as "the management organizations of laboratories and their heads"; and the method of determining ethical compliance has gradually evolved from "determination by the professional community" to an "ethics review" in which outsiders to the professional community, such as legal experts and ethics experts, participate.

This section has briefly reviewed the historical evolution of the nature of biomedical ethics: from the professional ethics of doctors, to the professional ethics of scientists, and then to a techno-ethics that "transcends the nature of professional ethics." The retrospection is not intended as a study of the history of biomedical ethics itself; rather, by reviewing the changes in the nature and normative content of biomedical ethics, it attempts to continue the development of artificial intelligence ethics within this tradition and thereby to explore its normative content effectively.

2.4 Seeking Theoretical Tools in Legal Methodology: The Essence of Things

The juridification of artificial intelligence ethics should take the well-established and thoroughly debated field of biomedical ethics as a reference while grounding itself in practical operations suited to the technological characteristics of artificial intelligence. Two specific steps are involved. First, the specific features of artificial intelligence ethics, which matter for enhancing the ethical sustainability of AI design and development, must be identified and summarized. Second, methodologies that help explain those features should be employed, and these methodological tools should effectively address and bridge the gap between facts and values. How, then, have legal scholars attempted to penetrate the barrier between facts and values (or to bridge the gap)? Naturally, we look to the toolbox of legal methodology.

Here we attempt to introduce the theoretical concept of "the essence of things," which is gradually falling into obscurity in domestic legal academia but is well suited to deductive reasoning. Although it has various definitions, the method of "the essence of things," primarily in Kaufmann's formulation, comprises three propositions: ① things possess an essence that manifests as characteristics or laws independent of human will, and the "essence" of things lies between "constructive ideas and norms" and "things" (the specific life relationships that law confronts); ② the "essence of things" has attributes that are both universally eternal and specific to different historical life relationships, and it therefore carries normative significance; ③ the "essence of things" may acquire normative force from reality, meaning that it has the character of a legal source. From this perspective, the structure and content of legal norms should follow the normative significance of the essence of things. The key point of the theory can thus be summarized as: "legal ideas or norms must be consistent with factual life and adapt to each other."

The theory of the essence of things can be illustrated by a typical scenario of its application. Article 112 of the 2012 Civil Procedure Law defines "civil false litigation" as litigation "where the parties maliciously collude to infringe upon the rights and interests of others through litigation, mediation, and other means." In essence, however, "false litigation" consists in parties fabricating disputes to harm the lawful rights and interests of others or the public interest; it can involve collusion between parties, or it can be carried out by a single party fabricating evidence and creating legal relationships without collusion. Once the characteristics and patterns of false litigation are understood according to its essence, it becomes clear that both collusion among parties and a single party's fabrication manifest the social harmfulness of false litigation; both situations therefore require legal regulation, which forms the "appropriate content" of the legal norms on civil false litigation.

A similar approach can be adopted for the concept of juridification, incorporating the idea of the essence of things to bridge the characteristics of "artificial intelligence" and the characteristics of "artificial intelligence ethics." That is, for juridification to be effective, lawmakers in the broad sense, or "law finders," should derive (normative) ethical requirements from the characteristics of artificial intelligence technology and its industry. In a nutshell, the application scenario of the "essence of things" here shifts from the field of legal application to the broader legislative scenario.

Academic criticism of the theory of "the essence of things" concentrates on two points: ① as to its premises, it blurs the distinction between is and ought; ② as to methodology, it is unscientific, since related concepts such as analogy and types are dangerously uncertain and can easily become arbitrary tools. In the process of juridifying artificial intelligence ethics, however, analysis using the concept of "the essence of things" shows advantages in at least two respects: ① in the legislative context, owing to the democratic character of the legislative process (negotiation, the interplay of interests), the demand for scientific rigor in legal methodology is much lower than in the context of legal application, which effectively weakens the critical force directed at theories of the essence of things and makes it more feasible to reactivate the theory in legal practice; ② with the help of analogical reasoning and typification, it allows better reference to a well-juridified and fully debated field of technology ethics, namely bio-medical ethics, and thus makes it possible to propose normative requirements from the characteristics of artificial intelligence technology and the changes in social relationships it causes.

3 The Essence of Things Upon Which the Juridification of AI Ethics Depends: Three Background Constraints

Guided by the foregoing ideas, we can summarize three characteristics, or binding conditions, that the process of juridifying AI ethics cannot avoid: the technical embeddability of moral rules, stronger scenario specificity, and procedural dependence on the technological process.

3.1 Ethical AI and Technical Embeddability Features

The embeddability of AI ethics rests mainly on the development idea of "ethical AI," which aims to make ethical principles operable and to respect the scenario-specific character of AI applications. From the very beginning of AI product development, embedded ethics should be integrated into algorithms to predict, identify, and resolve ethical issues in the development and application of AI. Ethical AI refers primarily to algorithms, architectures, and interfaces that adhere to ethical principles such as transparency, fairness, responsibility, and privacy protection. This conception of AI is, in essence, an important approach to making AI ethics practical: turning abstract ethical principles into operable realities. Relatedly, the embeddability feature mainly refers to programming universally recognized ethical principles directly into algorithms, through deep integration, collaboration, and interdisciplinary methods during product development or design, so that the systems become "ethical" and can evolve autonomously or semi-autonomously.

Research on the embeddability of AI ethics is not new. As early as the mid-20th century, Asimov's three laws of robotics were an idealized expression of ethical embeddability. The core of this idea is to establish and implement a moral code system for robots through technology; in terms of the philosophy of science, it is to embed functional morality using a naturalistic methodology. The construction strategies are basically three. First, top-down: a set of operable ethical norms is installed in the intelligent agent, for example, that autonomous vehicles should minimize harm to others in the event of an accident (a minimal sketch of this strategy follows below). Second, bottom-up: intelligent agents study human behavior in real and simulated scenarios through machine learning techniques such as inverse reinforcement learning, so that they form values similar to humans' and act on them, as when autonomous vehicles learn from human driving behavior. Third, human-machine interaction: intelligent agents explain their decisions in natural language, enabling humans to grasp their reasoning and correct potential problems in a timely manner. According to mainstream discourse in the philosophy of science, these approaches can bring AI into the moral community and thereby influence human-machine interaction in three ways: AI will receive ethical attention from humans as animals do; humans will consider the actions of AI to be morally evaluable; and humans will regard AI as a target of moral argument and persuasion in decision-making. Yet all of these strategies face obvious difficulties: how can ethical categories be expressed and defined accurately and unambiguously in code and computation? How can intelligent agents be enabled to understand natural language accurately and communicate deeply with humans?
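The top-down strategy can be made concrete with a minimal sketch: a fixed ethical norm is embedded as a hard filter on the agent's candidate actions before execution. The action names, harm scores, and threshold below are hypothetical placeholders for illustration, not a real vehicle control system.

```python
# A minimal sketch of top-down ethical embedding: hard norms installed in the
# agent screen candidate actions before any goal-directed choice is made.
# All numbers and names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float   # estimated harm to others, supplied by the planner
    achieves_goal: bool

HARM_CEILING = 0.2  # embedded norm: never choose an action above this harm level

def choose(candidates: list[Action]) -> Action:
    # Norm 1 (constraint): discard any action whose expected harm is intolerable.
    permitted = [a for a in candidates if a.expected_harm <= HARM_CEILING]
    if not permitted:
        # Norm 2 (fallback): if nothing is permitted, minimize harm outright.
        return min(candidates, key=lambda a: a.expected_harm)
    # Otherwise prefer goal-achieving actions, breaking ties by lower harm.
    return min(permitted, key=lambda a: (not a.achieves_goal, a.expected_harm))

options = [Action("swerve left", 0.6, True),
           Action("brake hard", 0.1, True),
           Action("continue", 0.3, False)]
print(choose(options).name)  # -> "brake hard"
```

Even this toy example exposes the difficulty the paragraph above names: everything hinges on whether "expected_harm" can be defined and computed unambiguously, which is precisely where code and ethical categories part ways.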

In practice, the development of AI ethics embeddability is most publicly visible in AI product development related to biomedicine. For example, George Church's synthetic biology laboratory at Harvard Medical School employs full-time ethicists, and the Munich School of Robotics and Machine Intelligence (MSRM) at the Technical University of Munich hires ethicists and legal experts from the Munich Institute for the History and Ethics of Medicine and other units when producing AI-driven medical products. In addition, in some smart judicial system development I have participated in, the algorithmization of legal procedure is essentially an important part of this approach. Although the development of AI ethics embeddability is still ongoing, these practices suffice to establish the embeddability of ethical rules and remind us that the idea of ethical regulation needs to be implemented within the embedded process of technology research and development.

3.2 High-frequency Information Flow and Stronger Scenario Specificity

The moral challenges posed by AI technology, and the ethical responses to them, closely resemble traditional bio-medical ethics principles and can even share most basic principles with them, especially "promoting welfare" and "reasonable control of risks." Unlike bio-medical ethics before the rise of information technology, however, ethical issues in the field of AI are marked by stronger uncertainty and faster development: from industrial applications to autonomous driving, from elderly care to medical care to legal technology, the complexity of scenarios is dazzling. In scenario-based ethical review or assessment, ethical rules must be formulated to fit the application scenario. Compared with traditional bio-pharmaceutical technology, AI technology (including AI-aided bio-pharmaceutical technology such as brain-computer interfaces) iterates faster, and developers cannot afford the longer time needed for the corresponding assessments of ethical conflicts. It is for this reason that some experts believe the biggest obstacle to the effectiveness of AI ethics is operability.

Helen Nissenbaum, responding to the privacy challenges brought by information technology, especially the difficulty of defining personal information, the weakening and virtualization of the principle of informed consent, and the imbalance of interests among participants in information processing, proposed the contextual integrity theory of privacy (hereinafter, the scenario theory). Its core idea is targeted protection tied to specific scenarios: it advocates specific risk prevention and control and opposes generalized personal information protection; it advocates a dynamic view of information processing and opposes treating personal information as fixed; it advocates a tolerant regulatory approach conducive to industrial development and opposes an individualistic philosophy. Although the scenario theory is narrated from a sociological perspective, it too illustrates the highly scenario-specific character of AI technology and its ethical rules.
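The scenario theory's core claim, that the same piece of information is licit in one flow and a violation in another, lends itself to a simple illustration. The sketch below models an information flow by its context-relative parts and checks it against context-specific norms; the contexts, roles, and norms are hypothetical illustrations of the theory, not Nissenbaum's own formalization.

```python
# A minimal sketch of contextual integrity: privacy is judged not by the data
# type alone but by whether a concrete flow matches the transmission norms of
# its originating context. All contexts and norms below are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str          # who transmits the information
    recipient: str       # who receives it
    subject: str         # whom the information is about
    attribute: str       # what kind of information flows
    principle: str       # transmission principle governing the flow

# Context-relative norms: the same attribute is permitted in one context
# and a violation in another.
NORMS = {
    ("clinic", "diagnosis"): {("physician", "specialist", "referral")},
    ("retail", "purchase_history"): {("merchant", "customer", "receipt")},
}

def violates_integrity(context: str, flow: Flow) -> bool:
    allowed = NORMS.get((context, flow.attribute), set())
    return (flow.sender, flow.recipient, flow.principle) not in allowed

# A diagnosis flowing to an advertiser breaches the clinic's transmission
# norms, even though the identical attribute moves legitimately between
# physicians under a referral principle.
leak = Flow("clinic_app", "advertiser", "patient", "diagnosis",
            "behavioral_targeting")
print(violates_integrity("clinic", leak))  # -> True
```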

Compared with traditional bio-medical ethics, AI ethics (including the ethical fields created by applying AI systems to traditional bio-medical domains) displays its scenario specificity in the following respects:

First, the difficulty of defining personal information. Under the policy of "encouraging data circulation," the chain of information or data flow is extended indefinitely, and the boundary between privacy and information grows ever more blurred. Information that has left its original application scenario may be reused countless times, and the basis for its initial reasonable use no longer exists. In each reuse, whether the information counts as "personal information requiring protection," and how it should be "protected by classification and grading," depends on the nature of the specific scenario, the balance of interests among the entities in that scenario, and the application of various overlapping legal and ethical principles and rules. Traditional bio-medical ethics has had similar problems, such as the HeLa cell line case of 1951, in which cancer cells extracted from Henrietta Lacks were repeatedly replicated and used by bio-pharmaceutical companies for profit, a technological ethics issue brought about by scenario transformation.

Second, the virtualization of the right to informed consent. The informed consent system, which took shape through the research and development practice of bio-medical technology in the pre-information age, is considered a key part of the personal information protection system. With the advent of the information age, however, its fictitious character has become increasingly apparent. There is already a rich research literature on this point, which will not be repeated here.

Third, the sharing of risk responsibility. Following the deontological ideas that have profoundly influenced mainstream jurisprudence, the risk responsibility created by cutting-edge technology should naturally be borne by the "perpetrator," that is, the developer or service provider of the technology. If this "folk psychology" or "intuitive moral sense" were strictly followed in legal application, the "safe harbor principle" and the "red flag rule," which secured the development of online platform economies in the United States and China, could never have arisen. Worryingly, however, in just over twenty years and with the continuous iteration of digital technology, the "safe harbor principle" has undergone several substantial modifications in the regulatory practice of China and the United States. Moreover, the understanding of related ethical issues such as algorithmic discrimination, social equity, and risk control keeps evolving with the development of technology in the global context, making adaptation difficult.

With respect to scenario specificity, the academic community has also discussed whether AI ethics can be "professionalized." Some scholars hold that because bio-medical technology is "closed," bio-medical ethics can be transformed into "professional ethics" for university teachers and researchers, whereas AI is applicable to almost any human activity, so no ethics can be generated for a profession or occupation that "uses AI." This discussion may bear on the classification and grading of the content of AI ethics.

3.3 Transparency, Explainability Challenges, and Procedural Characteristics

In AI ethics, transparency, explainability, and accountability form a set of adjacent, chained principles, and a large literature discusses their technological origins and technological realizations. Before the emergence of generative AI, to address issues affecting social fairness such as algorithmic discrimination, countries gradually built a fairly complete algorithm governance system centered on algorithmic transparency and explainability. Because of the complex internal working mechanisms of large-model-based generative AI, however, transparency is low, explainability is difficult, and algorithmic accountability faces dilemmas. Although the academic community generally believes that the unexplainable phenomenon of "emergence" is temporary, it has seriously impacted, even subverted, the previous algorithm governance system. The "Interim Measures" attempt to solve this problem through a governance innovation that moves "from algorithm governance to model governance," but there is still a long way to go in terms of executability.

Against this background, it is quite unrealistic to assign responsibilities to developers or deployers on the idealistic premise of complete explainability. Here we may draw on the idea that "professional ethics is a procedural ethics" and return to due process as the way to regulate AI ethics. Using the principle of due process to regulate the application of AI, especially to restrain public power, is of course not a new topic; but how to implement the principle of technical due process remains an unresolved issue of technical practice, which brings us back to the ethical embeddability discussed earlier. We may need a broader perspective, starting from the design of AI cognitive architectures and the building-in of procedural mechanisms (a minimal sketch follows below).
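One way to make "built-in procedural mechanisms" concrete is a pipeline that guarantees procedural steps, notice, recorded reasons, and a human-review channel, regardless of how explainable the underlying model is. The stages and field names below are hypothetical, offered only to illustrate the idea of technical due process, not any actual regulatory requirement.

```python
# A minimal sketch of technical due process: the decision pipeline itself,
# rather than the model, guarantees notice, recorded reasons, and an appeal
# channel. All field names and values are hypothetical.
import json
import time

def decide_with_due_process(case_id: str, model_output: dict,
                            audit_log: list) -> dict:
    decision = {
        "case_id": case_id,
        "outcome": model_output["label"],
        # Notice: the affected party is told an automated system decided.
        "notice": "This decision was produced by an automated system.",
        # Reasons: whatever partial explanation the model offers is recorded,
        # even when the model itself is not fully explainable.
        "reasons": model_output.get("top_factors", ["unavailable"]),
        # Appeal: a human-review channel is built into the procedure itself.
        "appeal_channel": "human_review_queue",
        "timestamp": time.time(),
    }
    audit_log.append(decision)  # append-only trail for later ethical review
    return decision

log: list = []
result = decide_with_due_process(
    "case-001", {"label": "deny", "top_factors": ["income_volatility"]}, log)
print(json.dumps(result["reasons"]))  # -> ["income_volatility"]
```

The design choice here mirrors the argument of the paragraph: where substantive explainability cannot be guaranteed, the procedural record, what was decided, on what stated grounds, and how to contest it, becomes the object of regulation.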

For traditional bio-medical ethics, the debate over procedural versus substantive character essentially does not arise: its procedural ethics centers on "review," and its substantive content has been relatively settled. The strong scenario specificity of AI ethics, by contrast, means that many ethical rules must be generated on the spot during ethical review. Against this background, the processual and procedural character of AI ethics is all the more apparent, requiring us to attend to the detailed institutional design of the ethical review process from the perspective of procedural law and related angles.

4 Conclusion

At present, entities across the practical field eagerly await AI ethics compliance systems that meet the requirements of operability, predictability, and computability. An analysis of the various AI ethics principles shows that almost all of their requirements are already covered by the data and algorithm regulatory systems of the world's major countries and regions. The concepts now in common use, such as the "legalization of AI ethics" or the "standardization of AI ethics," therefore cannot serve clearly and effectively as construction goals and analytical tools for the practical operation of AI ethics compliance. Worse, these concepts fail to present, or rather obscure, the problem consciousness behind many opposed positions in AI legislation, such as "safety versus innovation" and "regulation versus development."

In the current Chinese context, "juridification" might originally have been used as a synonym for "rule of law." Over time, however, "rule of law" has become a vague concept akin to "standardization" and "institutionalization." By contrast, through the continuous enrichment of the theoretical context of legal sociology, "juridification" not only retains a rich descriptive function but also performs a reflective function with respect to rights justification, procedural protection, and other "rule of law goals." More importantly, in an era when cutting-edge technology profoundly shapes social practice, the concept of juridification also allows better analysis of the legal nature of norms that do not belong to the traditional formal legal system, such as technology ethics, technical standards, and the internal standards of enterprises and institutions.

The existing juridification pathways of technology ethics are mainly three: the justification of emerging rights commonly used by deontologists (such as personality rights), the soft law commonly used by consequentialists (such as innovation in government regulatory tools), and the community ethics commonly used by virtue theorists (such as professional ethics and business ethics). How, then, building on the above research, can the juridification path of AI ethics be realized in practice? Obviously, the most convenient way is to compare AI ethics with the highly juridified field of bio-medical ethics (there is, of course, also the option of continuing within the bio-medical ethics tradition), abstract the characteristics of AI ethics, and then derive the specific (normative) ethical requirements suitable for AI.

The theory of the essence of things in legal methodology aims mainly to "bridge the gap between facts and values with the concept of the essence of things" in the application of law. Although it has recently seen some application in doctrinal research on sectoral law, the theory has on the whole fallen relatively silent owing to doubts about its scientific rigor. In the scenario of juridifying AI ethics, however, given the democratic character of legislation in the broad sense (negotiation, the interplay of interests), it may prove an effective theoretical tool for gradually deriving distinctive ethical requirements from the characteristics of AI technology and the changes in social relationships it causes. Under the guidance of this idea, we have summarized three binding elements of the juridification of AI ethics: the technical embeddability of moral rules, stronger scenario specificity, and procedural dependence on the technological process.

In the preceding text, the bold concept of "juridification," which emerged in an era of pluralistic values and carries a distinct critical tradition, and the concept of the "essence of things," which emerged in an era of monistic values and serves mainly the application of law, have been brought together in the discussion of AI ethics, a new phenomenon. Much supplementary work remains in deepening the theoretical concepts and tightening the rigor of the argument. More importantly, much current AI research suggests that the study of the combination of AI and society must return to the research approach of complexity to obtain better solutions. For example, AI ethics should rely mainly on a bottom-up approach to its generation, and analysis in terms of complex adaptive systems may yield better results. Owing to limitations of space, however, this cannot be expanded here.