The construction of artificial intelligence ethical norms from the perspective of the rule of law



Written by SONG Hualin

Dean and Professor of the School of Law, Nankai University



An AI system is software developed with one or more AI technologies and methods that, for a given set of human-defined goals, generates outputs such as content, predictions, recommendations, or decisions that affect the environments with which it interacts. Ethical risk refers to the uncertain events or conditions that may arise from positive or negative effects in the ethical relations between people, between people and society, between people and nature, and between people and themselves, and especially to uncertain negative ethical effects. AI ethical risks may arise at the technical, data, application, and societal levels: the technical level involves algorithm and system security, interpretability, algorithmic discrimination, and algorithmic decision-making; the data level involves risks inherent in the collection, circulation, and exploitation of data; at the application level, risks are reflected in the abuse and misuse of algorithms in AI activities; and at the societal level, the application of AI may produce inequality and indirectly give rise to ethical issues such as unemployment and wealth redistribution.

On October 30, 2018, while presiding over the ninth collective study session of the Political Bureau of the CPC Central Committee, General Secretary Xi Jinping pointed out that it is necessary to "strengthen the research on legal, ethical and social issues related to artificial intelligence, and establish and improve laws, regulations, institutional systems, and ethics and morals that ensure the healthy development of artificial intelligence". The ethical risks of AI must be taken seriously and guarded against: a legal and ethical framework should be established to ensure the healthy development of AI, and laws, regulations, and ethical norms should be formulated to promote its development.


1. Adhering to "ethics first" in AI governance


Ethics is an intricate concept that can be defined as the moral principles governing the behavior of individuals or groups. In other words, ethics is a set of principles, rules, or norms that help determine what is good or right; it can also be seen as a discipline that distinguishes right from wrong and defines the moral obligations and responsibilities of human beings or of artificial intelligence. Law and ethics differ greatly in their level of normative value, scope of adjustment, mode of regulation, and degree of coercion. Legal norms regulate conscious external behavior, while ethical norms encompass both values and codes of conduct. Legal norms are heteronomous: they are formulated by the state and backed by the coercive power of the state to ensure their implementation. Ethical norms are more self-disciplined: they are formed by society and rely mainly on the conscious observance of members of society. In the field of artificial intelligence, it is necessary to advocate the "ethical" design and development of AI and to integrate the concept of "ethics first" throughout the entire process of AI development, design, and use, so as to promote coordinated development and benign interaction between AI science and technology activities and AI ethics, ensure that AI is safe, reliable, and controllable, and realize responsible AI innovation.

The "ethics first" in AI governance is also a vivid embodiment of inclusive and prudential supervision. Inclusive and prudential regulation requires regulators to have a tolerant attitude towards new business formats, and to encourage, protect, and tolerate innovation. Article 55 of the Regulations on Optimizing the Business Environment stipulates that "the government and its relevant departments shall, in accordance with the principle of encouraging innovation, implement inclusive and prudent supervision of new technologies, new industries, new business forms and new models." The author believes that it may be difficult to restrict rights and obligations through legislation at this time, and we should adhere to the "ethics first", based on China's own artificial intelligence development stage and social and cultural characteristics, follow the law of scientific and technological innovation, and gradually establish an artificial intelligence ethics system in line with China's national conditions. Ethical constraints shall be integrated into all aspects of the AI research and development process, and corresponding ethical requirements shall be strictly complied with in data collection, storage, and use, and the use of AI technologies and applications that violate ethical requirements shall be prohibited. The purpose of AI ethical governance is not to "put the brakes" on AI technology and applications, but to encourage innovation and set feasible conditions for the exploration and innovation of AI technology frontiers.

The principle of "ethics first" calls for an examination of the relationship between ethical norms and legal norms. To a certain extent, ethics is an important supplement to law: ethical concepts shape the values embodied in laws and affect their nature and direction. AI ethical norms precede legal norms, and future AI legislation can incorporate the core elements of ethical norms into the legal framework. For example, the General Office of the CPC Central Committee and the General Office of the State Council proposed in the "Opinions on Strengthening the Governance of Science and Technology Ethics," issued in 2022: "Promote basic legislation on scientific and technological innovation to make clear provisions on the supervision of science and technology ethics and the investigation and punishment of violations, and implement the requirements of science and technology ethics in other relevant legislation." In 2023, seven departments including the Cyberspace Administration of China issued the "Interim Measures for the Management of Generative AI Services", which stipulate that the provision and use of generative AI shall comply with social morality and ethics, and which include requirements such as adhering to the core socialist values, preventing discrimination, refraining from monopolistic conduct and unfair competition, respecting the lawful rights and interests of others, and improving the transparency of services. To a certain extent, this reflects the absorption of ethical rules into AI legal norms.


2. Formulation of ethical norms for artificial intelligence


In emerging risk areas such as AI, legislators cannot specify in detail the activities that give rise to AI risks or the safety standards and requirements to be followed. Moreover, emerging risks are still developing dynamically: the time is not yet ripe for China to enact unified AI legislation, and it is not feasible to establish codes of conduct for AI activities by statute. There is therefore an urgent need to introduce ethical norms for artificial intelligence. Introducing ethical norms serves not only to guide the basic direction of scientific and technological development, but also to provide flexible space for R&D institutions and enterprises to make choices according to specific technical scenarios. After sufficient experience in AI governance has been accumulated through ethical norms, it may be possible to consider gradually replacing them with more detailed and precise laws.

For instance, the U.S. Department of Defense adopted its Ethical Principles for Artificial Intelligence in February 2020, and the Office of the Director of National Intelligence issued the Principles of Artificial Intelligence Ethics for the Intelligence Community and the accompanying Artificial Intelligence Ethics Framework for the Intelligence Community in July 2020. Article 69(1) of the European Commission's 2021 proposal for an Artificial Intelligence Act stipulates that the Commission and the Member States shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application, to AI systems other than high-risk AI systems, of the requirements set out in Chapter 2 of the draft, on the basis of technical specifications and solutions that, in light of the systems' intended purpose, constitute appropriate means of ensuring compliance. On March 29, 2019, the Japanese Cabinet adopted the Social Principles of Human-Centric AI, which established the highest-level standard for the ethical supervision of AI in Japan and set out seven principles for the development and utilization of AI. In 2019 and 2021, respectively, Japan's Ministry of Economy, Trade and Industry (METI) issued guidelines on contracts for the utilization of AI and data and guidelines on AI governance: the former guide companies in protecting privacy and related rights in the course of data utilization and AI software development, while the latter guide companies in establishing systems for AI ethics management.

To a certain extent, the formulation of China's AI ethical norms has been "top-down" in character. In 2017, the State Council issued the "New Generation Artificial Intelligence Development Plan", which proposed to give full play to the government's important role in formulating ethics laws and regulations, and to establish an ethical and moral framework to ensure the healthy development of artificial intelligence. In 2022, the General Office of the CPC Central Committee and the General Office of the State Council, in the "Opinions on Strengthening the Governance of Science and Technology Ethics", made the member units of the National Science and Technology Ethics Committee responsible, according to their division of duties, for formulating science and technology ethics norms and related work, and proposed formulating ethics norms and guidelines in key areas such as artificial intelligence. In 2020, five departments including the Standardization Administration of China promulgated the "Guidelines for the Construction of the National New Generation Artificial Intelligence Standard System", which pointed out the need to establish ethical standards for artificial intelligence, especially to guard against the risks that AI services may pose to traditional morality, ethics, and the legal order.

In June 2019, China's National Governance Committee for the New Generation Artificial Intelligence issued the Governance Principles for the New Generation Artificial Intelligence – Developing Responsible Artificial Intelligence, emphasizing eight principles: harmony and friendliness, fairness and justice, inclusiveness and sharing, respect for privacy, security and controllability, shared responsibility, openness and collaboration, and agile governance. In September 2021, the same committee issued the Ethical Norms for the New Generation Artificial Intelligence, which put forward six basic ethical requirements: improving human well-being, promoting fairness and justice, protecting privacy and security, ensuring controllability and trustworthiness, strengthening accountability, and improving ethical literacy. It also set out eighteen specific ethical requirements for particular activities such as AI management, research and development, supply, and use.

Emerging technologies such as artificial intelligence are characterized by rapid iteration, high uncertainty, complexity, and potential risk. The introduction of AI ethical norms is also an embodiment of "agile governance". "Agile governance" is a set of actions or methods characterized by resilience, fluidity, flexibility, or adaptability. Its characteristics can be summarized in the following two points.

First, the formation of AI ethical norms involves broad participation, requiring governments, enterprises, consumers, and other stakeholders to take part in the process of norm formation. In forming AI ethical norms, programmatic concepts and participatory procedures allow different subjects to express their own views, preferences, and positions, creating a mechanism that promotes and encourages consultation and mutual learning among organizations. Take the "Guidelines for the Standardization of Ethical Governance of Artificial Intelligence", promulgated in March 2023, as an example: they were compiled under the leadership of the China Electronics Standardization Institute, relying on the National Artificial Intelligence Standardization General Group and the Artificial Intelligence Subcommittee of the National Information Technology Standardization Technical Committee, and brought together 56 units from government, industry, academia, research, and application, including Zhejiang University and SenseTime (Shanghai).

Second, AI ethical norms also reflect a typical form of "reflexive law". In the dynamic evolution of AI governance, the performance of AI ethical norms can be evaluated regularly to determine whether their contents need to change, so that appropriate amendments can be made to their principles and content. Compared with legal norms, ethical norms are "living documents": they are easier to supplement and amend continuously, and, by adjusting governance methods and ethical norms dynamically and in a timely manner, they can respond quickly and flexibly to the ethical challenges brought about by AI innovation.


3. The basic principles that should be adhered to in the ethics of artificial intelligence


Owing to the limitations of historical conditions and stages of development, human recognition of the ethical risks of AI products lags behind, ethical control over such products is often imperfect, and at the same time these products are being given ever more independent decision-making power, giving rise to more ethical and moral problems. It is therefore all the more necessary to promote the formation of AI ethical norms through government leadership combined with the participation of multiple subjects. As norms of science and technology ethics, AI ethical norms should embody the principles of improving human well-being, promoting fairness and justice, protecting privacy and security, maintaining openness and transparency, and strengthening accountability.


3.1 Improving human well-being

The Constitution of the People's Republic of China does not explicitly stipulate a national task of "improving human well-being", but its preamble states the need to "promote the coordinated development of material, political, spiritual, social and ecological civilizations". Article 47 of the Constitution provides for citizens' freedom to conduct scientific research, under which the state has the obligation to "encourage and assist" the creative work of citizens engaged in scientific and technological undertakings that is "beneficial to the people". The wide application of artificial intelligence in education, medical care, elderly care, environmental protection, urban operation, and other scenarios will improve the precision of public services and comprehensively enhance human well-being.

At the Central Conference on Work Related to Overall Law-Based Governance in November 2020, General Secretary Xi Jinping pointed out that comprehensive law-based governance "must persist in serving the people and relying on the people. A people-centered approach must be taken. The fundamental goal of promoting law-based governance is to protect the people's rights and interests. To advance overall law-based governance is to protect people's rights, meet their demands and constantly enhance their sense of fulfillment, happiness and security." The development and application of artificial intelligence should adhere to a people-centered development philosophy, follow the common values of humanity, respect human rights and the fundamental interests of humankind, and abide by ethics and morality. The development and utilization of AI should promote harmony and friendship between humans and machines, help build an intelligent society, improve people's livelihood and well-being, and continuously enhance people's sense of gain and happiness.

Following this line of argument, the development and use of AI must not infringe on "human dignity". Human beings must not be treated as mere objects or tools; respect for "human dignity" means that every individual member of society receives from society at least the minimum positive guarantees of a dignified existence.


3.2 Promote fairness and justice

The application of artificial intelligence may involve algorithmic discrimination and data discrimination. Algorithms may embed value judgments, may give undue weight to particular objects, projects, or risks, and may even have improper purposes implanted in them; it is difficult for algorithms to take full account of factors beyond rules and numbers, and the results of algorithmic learning may be unpredictable. The data used in AI applications may likewise lack balance and representativeness, and the data itself may be biased, all of which affects the fairness and impartiality of AI activities.

In the application of artificial intelligence, the principle of promoting fairness and justice should be adhered to, that is, the concept of equality that "like cases should be treated alike": people or groups who share the same relevant characteristics should receive the same distribution. First, when artificial intelligence is used in administrative law enforcement, judicial adjudication, or the allocation of scarce social resources, factors such as the activities of those affected, the results they produce, and their actual needs should be considered, so that each person can, as far as possible, receive what he or she deserves. Second, the application of AI should be inclusive and broadly beneficial, striving to reduce, reconcile, and even eliminate states of "de facto inequality", so that everyone can, on an equal footing, share in the opportunities AI offers society as a whole; it should promote social fairness, justice, and equality of opportunity, and take into account the specific needs of different age groups, different cultural systems, and different ethnic groups. Third, discrimination and prejudice against different or specific groups must be avoided in data acquisition, algorithm design, technology development, and product development and application.


3.3 Protecting privacy and security


3.3.1 Protecting privacy

Privacy refers to the tranquility of a natural person's private life and to the private space, private activities, and private information that he or she does not want others to know. Natural persons enjoy the right to privacy, and no organization or individual may infringe upon it by spying, intrusion, leaking, disclosure, or other means. The use of artificial intelligence is premised on the deep learning of algorithms; as a data-driven technology, deep learning requires the collection of large amounts of data, which may involve users' interests and hobbies and other private or personal information. Moreover, when AI technology is used to collect, analyze, and use personal data, information, and speech, it may bring harm, threats, and losses to personal privacy.

As far as privacy protection is concerned, the research, development, and application of artificial intelligence must not involve products or services that infringe on personal privacy or personal information rights and interests. Providers of AI services shall comply with the relevant provisions of laws and administrative regulations such as the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law, as well as the regulatory requirements of the relevant competent authorities. In AI research, development, and application, personal information shall be handled in accordance with the principles of legality, legitimacy, necessity, and good faith, and its processing shall be premised on the individual's voluntary and explicit consent given with full knowledge. Individuals' lawful rights and interests in data must not be harmed; personal information must not be illegally collected or used by means such as theft, alteration, or leakage; and personal privacy must not be violated.

3.3.2 Ensuring security

According to Article 76 of the Cybersecurity Law, network security "refers to the ability, through the adoption of necessary measures, to prevent attacks, intrusions, interference, destruction, and illegal use of networks, as well as accidents, to keep networks in a state of stable and reliable operation, and to ensure the integrity, confidentiality, and availability of network data". The security of AI systems should be guaranteed: algorithms must not fall under the control of hackers, and systems and algorithms must not be attacked or tampered with. At the same time, attention should be paid to personal safety in AI activities, that is, to ensuring that AI technology does not harm humans. It is therefore necessary to strengthen the cybersecurity protection of AI products and systems, build mechanisms for AI security monitoring and early warning, and ensure that AI develops within a safe and controllable range.

In addition, according to their importance and the degree of harm they may cause, AI systems may be divided into three levels: medium- and low-risk intelligent systems, high-risk intelligent systems, and ultra-high-risk intelligent systems. The relevant competent state departments should, based on the characteristics of different AI technologies and their applications in relevant industries and fields, improve scientific regulatory methods compatible with innovation and development, and formulate corresponding rules or guidelines for categorized and tiered regulation. For example, for high-risk and ultra-high-risk AI applications, a regulatory model of prior assessment and risk early warning can be adopted; for medium- and low-risk AI applications, a regulatory model of ex-ante disclosure and ex-post tracking can be adopted. This will help allocate limited regulatory resources more effectively and ensure the safe use of AI.


3.4 Maintaining openness and transparency

In the field of AI ethics, the openness and transparency of AI refer to disclosing the source code and data used in AI systems, without harming the interests of the algorithms' owners, so as to avoid "technical black boxes". For example, Article 16 of the Provisions on the Administration of Algorithmic Recommendation for Internet Information Services provides: "Providers of algorithmic recommendation services shall inform users in a conspicuous manner of their provision of algorithmic recommendation services, and publicize the basic principles, purposes, and main operational mechanisms of algorithmic recommendation services in an appropriate manner." Consideration could be given to making the algorithmic process public, disclosing the relevant records generated when the algorithm was validated, and informing the public how the algorithm was developed and what considerations were taken into account in its development. However, the degree of disclosure should be determined according to the specific scenario and the specific audience: in some cases an algorithm should be disclosed publicly, in others only to a limited circle, and in still others not at all; disclosure of algorithms should not be applied mechanically as a general principle. Completely disclosing an algorithm's code and data may leak individuals' sensitive private data, may damage the trade secrets and competitive advantages of the entities that design AI systems, and may even endanger national security.

Transparency is also reflected in the explainability of AI systems. Article 24, paragraph 3 of the Personal Information Protection Law stipulates: "Where a decision that has a significant impact on an individual's rights and interests is made through automated decision-making, the individual has the right to request an explanation from the personal information processor, and has the right to refuse decisions made by the personal information processor solely through automated decision-making." A statement of reasons helps safeguard the procedural rights of the parties and enhances the acceptability of decisions. Therefore, when AI products and services have a significant impact on individuals' rights and interests, users have the right to request that the provider explain the process and method of the relevant decision-making, and have the right to complain about unreasonable explanations. When an AI provider gives an explanation, it is, first of all, a local explanation: it explains a particular decision and need not account for the activities of the AI system as a whole. Second, it is a causal explanation: it identifies which factors were present and why they led to the result in question. There is, however, no need to explain the technical details of the system at excessive length.

3.5 Strengthening accountability

Accountability applies to the different actors in AI activities. Accountability is part of good governance and is associated with answerability, transparency, and responsiveness. Accountability is explanatory: there is a duty to describe or explain the action taken. It is also corrective: there is a responsibility to correct errors when they occur. Accountability in the use of artificial intelligence can be broken down into six elements: who is accountable, to whom accountability is owed, by what standards accountability is assessed, for what matters, through what procedures, and with what consequences. These elements make it possible to hold the multiple subjects in the network of AI activities accountable in a seamless manner and through procedural mechanisms established in accordance with law, so that the accountability system matches the AI activities for which accountability is sought.

AI systems are assemblages of data sets, technology stacks, and complex human networks, so relationships of accountability are often complex. AI technology has the potential to replace human labor and even to shape people's spiritual world, but what AI does is to fragment, segment, and disperse human characteristics and skills. AI may be seen as a "person for a specific purpose", but it can only play a role in a specific field, in a specific respect, and at a specific link in the chain. Artificial intelligence cannot completely replace human beings, still less weaken human subjectivity. Therefore, in the use of artificial intelligence, it is necessary to insist that human beings are the ultimate responsible subjects; to clarify the responsibilities of stakeholders; to comprehensively enhance the sense of responsibility, introspection, and self-discipline at every stage of the AI life cycle; to establish AI accountability mechanisms; and not to evade review of responsibility or shirk responsibility.

4. Establishing a multi-faceted and overlapping system of AI ethical norms

The foregoing has discussed the possibility of introducing the principle of "ethics first" across the whole life cycle of AI use, the general state of China's AI ethical norms, the substantive principles and core concerns that ethical norms should contain, and how to internalize these principles and concerns in China's AI legal rules and policy system.

It should be pointed out that AI ethical norms cannot be relied upon to respond to all the problems of AI activities in China. As far as the formation and implementation of AI ethical norms are concerned, the following problems may remain: first, AI ethical norms are often guiding or advocacy norms, and if their core content cannot be embedded in legal norms, much of an ethics code remains merely hortatory and industry will not necessarily follow it; second, the content of AI ethical norms may be vague or overly idealistic, and industry may not know what measures to take to implement their requirements; third, violations of AI ethical norms are difficult to establish, and it is also difficult to impose follow-up sanctions for such violations.

In future AI legislation, ethical principles and norms should be clearly given a place. In the author's view, the legislation could stipulate that "engaging in AI research, development, application, and related activities shall comply with ethical principles and ethical norms". The subjects that formulate ethical norms, however, need not be limited to administrative departments: the law could clearly stipulate that societies, associations, industrial technology alliances, and the like have the right to formulate and implement AI ethical norms by promulgating standards or rules. It should be noted that Article 18, paragraph 1 of the Standardization Law stipulates: "The State encourages social organizations such as societies, associations, chambers of commerce, federations, and industrial technology alliances to coordinate with relevant market entities to jointly formulate group standards that meet the needs of the market and innovation, to be adopted by agreement among the members of the group or, in accordance with the group's rules, made available for voluntary adoption by society." Group standards are thus standards voluntarily adopted by the members of a group or by society at large. The formulation of self-disciplinary AI ethical norms by societies, associations, and industrial technology alliances helps build ethical norms on the basis of consensus and adapt them better to changes in the AI industry. This not only activates the forces of society and draws on professional knowledge for self-regulation, but also shortens the distance between rule-makers and the public, facilitates the implementation of ethical norms, and allows AI ethical norms and legal norms to substitute for and complement one another, constituting an overlapping system of rules.

Individual enterprises should be encouraged to promulgate their own AI ethics codes and to implement trustworthiness requirements for AI technologies, products, and services. Since 2018, Google, Microsoft, IBM, Megvii, Tencent, and other enterprises at home and abroad have issued corporate AI governance guidelines and set up internal bodies to carry out AI governance responsibilities. The author believes that future AI legislation should clearly stipulate that units engaged in AI science and technology activities must fulfill the primary responsibility for AI ethics management and strive to formulate their own AI ethical norms or guidelines. Where the research of such units touches on sensitive areas of science and technology ethics, an AI ethics review committee should be established, its responsibilities for ethics review and supervision clarified, and the rules and processes for AI ethics review, risk handling, and the handling of violations improved.

When the law obliges market players in the field of artificial intelligence to formulate AI ethical norms and establish AI ethics review committees, it embodies the essence of "regulation of self-regulation", or "meta-regulation", which links administrative regulation, characterized by imperative means, with the self-regulation of private subjects, forming a hinge between the two. Meta-regulation casts a net that is loosely woven yet lets nothing slip through: it leaves ample flexibility for the activities of the self-regulatory system, while its secret lies in the fact that, when self-regulation fails, the stability of the AI regulatory structure can still be maintained.

It should be pointed out that the requirements of due process of law and public participation should not be abandoned merely because AI ethics has a strong scientific and technological background. In constructing a multi-faceted system of AI ethical norms, an appropriate process of democratic deliberation should be built, so that stakeholders, including the general public, social groups, and the news media, can discuss AI policy issues under conditions of sufficient information, equal opportunity to participate, and open decision-making procedures. In this way, different voices can enter the arena in which AI ethical norms take shape and different interests can be properly weighed, the better to forge scientific consensus, ensure the purposiveness, democratic legitimacy, and transparency of ethical norms, improve their scientific soundness, effectiveness, and flexibility, and provide effective assistance and guidance to the AI industry.