Ji Weidong, Shen Wei, et al. | AI Governance in Multiple Voices: China's Voice and a Dialogue with Professor Simon Chesterman
Four Models of Artificial Intelligence Legislation and the Consensus on International Cooperation
Ji Weidong
Senior Professor of Humanities at Shanghai Jiao Tong University, Dean of the China Institute for Socio-Legal Studies, Co-Chairman of the First Council of the United Nations University Global Artificial Intelligence Network, and Chair of the Computational Law Branch of the China Computer Federation (CCF).
The 2024 Nobel Prizes in Physics and in Chemistry were both awarded for discoveries and inventions in artificial intelligence and its scientific applications. The news came as a shock, and it made everyone more keenly aware that we are in the midst of an AI-driven digital revolution. There is no doubt that artificial intelligence can significantly improve efficiency. But will artificial intelligence lead the world into a post-human or post-humanist era? That question is still being debated. In this connection, Yuval Harari advanced an important concept in Homo Deus: A Brief History of Tomorrow, the "Homo Deus" or man-god, which seems not to have received sufficient attention in China. Will post-humans, transhumans, man-gods, or superhumans come to exert overwhelming power over humanity and over individual human beings? If so, they may pose a serious challenge to the modern system of national governance.
Over the past two or three years, large models have iterated rapidly, and generative artificial intelligence is affecting every aspect of daily life. The generative discourse order formed through human-machine coexistence and human-machine dialogue is placing everyone within collective interaction. The concept of "I" is bound to become relative, and "we" will increasingly become the subject of the lifeworld. In other words, in the current era of the rapid rise of generative artificial intelligence, the shift of philosophical and legal thinking from the individual "I" to the collective "we" has become ever harder to stop. Although the basic unit of computer-system use is still the individual, and identity authentication and data management still follow the principles of one person, one account and autonomous individual freedom, the self that once swelled in social networks is, after entering the stage of generative AI, increasingly absorbed into large models, increasingly merged into interactive collectives, and transformed into the "we" of the discourse order, where it is re-presented. This move from individual-centered to collective-centered is a fundamental change in the social and research paradigms that both computational law and the digital rule of law must confront.
From a legal perspective, traditional judicial judgment rests largely on rules of experience. Judges, lawyers, and parties therefore play one "incomplete information game" after another. Given the limits of time and information, judges must abstract the specific circumstances of a case into simple rules for handling it. Only in a few simple cases, where both the facts and the law are clear and distinct, does a "complete information game" exist. Artificial intelligence with machine-learning capability, however, selects the best solution from the full data of a vast database. The essence of smart justice is thus to convert incomplete information games into complete information games, so as to respond in a complex way to specific situations. Of course, artificial intelligence has long been constrained by the so-called frame problem and the symbol grounding problem, and it struggles to fully reflect human intuition, common sense, tacit knowledge, and value judgments. Yet large language models and multimodal models bring all activity data, including traces of intuition, common sense, tacit knowledge, and value judgments, as well as all expressive data such as images, sound, and video, into the scope of learning and model refinement. This can, to a considerable extent, dissolve the frame and symbol grounding problems. Moreover, humanoid robots can increase the knowledge elasticity of artificial intelligence by setting physical or bodily boundaries that limit the scope of information processing, which has the potential to go beyond deep learning rather than letting data scale alone decide the outcome. The rapid advance of AI technology means that the trial methods of smart courts will likewise leap from quantitative to qualitative change.
For a long time, artificial intelligence indeed relied on algorithms and belonged essentially to the world of computation. When considering the relationship between law and artificial intelligence, it was therefore largely sufficient to speak of computational law and of the explainability and fairness of algorithms. In this sense, the relationship between computational law and AI governance could be summed up as XAI (explainable artificial intelligence). Today, however, the rapid iteration of generative AI has made human-machine coexistence and human-machine dialogue the social norm. Research on artificial intelligence and law within computational law must therefore introduce a new keyword: CAI (co-evolutionary AI), meaning artificial intelligence that co-evolves with humans. This signals that AI research and development is entering a new stage, and AI governance is bound to follow. Some form of intelligence that humans cannot understand is quite likely to emerge. Once artificial intelligence can write code and modify programs on its own, the black-box character inherent in big data will inevitably be reinforced in generative AI, and may even trigger the risk of AI losing control and of social unrest.
Given this new phase of AI development, AI safety has become a focus of worldwide attention, and from that perspective the question of how to approach AI legislation is bound to become a central topic in research on AI and law. On March 13, 2024, the European Parliament passed the Artificial Intelligence Act, which entered into force on August 1, with most of its rules applying from August 2, 2026. The EU legislature has evidently ranked safety above development, requiring member states to further tighten supervision over the expansion of technological frontiers and application scenarios; its basic characteristic is that regulation takes precedence over research and development. The EU Artificial Intelligence Act sorts AI risks into four levels, unacceptable, high, limited, and minimal, and specifies a different regulatory approach for each. Of particular note, the Act prohibits outright those AI applications it deems extremely harmful and contrary to European values, including the manipulation of individual behavior, social scoring systems, the use of real-time remote biometric identification, and the introduction of predictive policing systems. In addition, legal expert systems that assist judges and lawyers, as well as intelligent adjudication projects, are classified as high-risk and subject to focused supervision. In sharp contrast, the "Blueprint for an AI Bill of Rights" released by the White House Office of Science and Technology Policy in October 2022 and the AI Risk Management Framework released by the National Institute of Standards and Technology in January 2023 are declarations of principle and policy without legal binding force; and against the backdrop of global public concern over the safety risks of large models, the voluntary commitments on AI research and application that the Biden administration reached with seven leading AI companies on July 21, 2023 are likewise not legally binding. China, in fact, was the first to advocate agile governance principles for AI, in 2019, an approach as flexible as that of the United States. But on September 25, 2021, the National New Generation Artificial Intelligence Governance Specialist Committee issued the "Ethical Norms for New Generation Artificial Intelligence", which makes responsibility review and accountability across the entire AI lifecycle one of its basic principles. On March 1, 2022, the internet information service algorithm filing system was officially launched, forming a "three-in-one" basic framework for AI supervision: algorithm filing, algorithm inspection, and algorithm accountability. In particular, the 2021 special campaign against algorithm abuse and the 2022 special campaign on comprehensive algorithm governance punished violations severely, displaying the rigid side of regulation.
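To make the four-tier structure concrete, the following is a minimal illustrative sketch in Python. The tier names follow the Act, but the example systems, the lookup-table classification, and the one-line obligation summaries are simplified assumptions for exposition, not the Act's actual legal tests or wording.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, registration, human oversight"
    LIMITED = "transparency duties (e.g., disclose AI interaction)"
    MINIMAL = "no new obligations; voluntary codes of conduct"

# Illustrative mapping only -- the Act's real tests are legal, not name-based.
EXAMPLE_SYSTEMS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "predictive policing": RiskTier.UNACCEPTABLE,
    "legal expert system assisting judges": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def regulatory_approach(system: str) -> str:
    """Look up an example system and describe the tier-specific treatment."""
    tier = EXAMPLE_SYSTEMS.get(system, RiskTier.MINIMAL)
    return f"{system}: {tier.name} -> {tier.value}"

if __name__ == "__main__":
    for name in EXAMPLE_SYSTEMS:
        print(regulatory_approach(name))
```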
Against this background, my speech at the international symposium on AI at the United Nations University Institute in Macau in the spring of 2024 analyzed four models of AI legislation: the EU's hard-law model, the soft-law model of the United States and Japan, China's hybrid soft-and-hard model, and Singapore's procedural-technological model. It is worth noting that Singapore has taken a low-key, pragmatic, technological path to implementing the principles and policies of AI governance. In May 2022 its government launched AI Verify, the world's first open-source testing toolkit for AI governance, which integrates testing with process inspection so as to pursue trustworthy AI through procedures that are safe, flexible, transparent, auditable, accountable, and mutually balancing. Although Professor Simon Chesterman modestly stressed in his earlier speech that this is merely an experience suited to a small city-state, I believe this "procedural-technological approach" is both a distinctive feature of Singapore's AI governance and has the potential for wide adoption. For example, IBM's AI governance product watsonx.governance, launched in December 2023, closely integrates risk prevention with automated regulatory tooling, providing AI "nutrition labels" keyed to AI regulations and policies and proactively detecting and mitigating bias in LLM metrics, much in the spirit of AI Verify. In addition, the Image World Model (IWM), which strengthens AI's capacity for self-supervised learning, can play a similar controlling role. It is here that an answer becomes dimly visible to the question of how legislators can strike a proper balance between the development of artificial intelligence and the security of social systems. If the technological development of large models is not only the object of AI governance but can in turn empower AI governance, then technology companies need not fear AI legislation. Indeed, if research on large-model safety can yield a toolbox of testing, evaluation, and monitoring through the procedural-technological approach, including promoting digital watermarking, developing small AI verification models, building an AIGC anti-counterfeiting system, establishing an AI ethics management index system and certification platform, and weaving an AI safety assurance network, then regulation and development will no longer be a zero-sum game. AI governance can itself open new investment opportunities and market space for AI research and development, and through differentiated competition create a technological blue ocean for enterprises. In other words, only when performance gains and safety gains in language and multimodal models stand in a certain proportional relationship, and only when regulation shifts to procedural and technological standards, can individual countries and the world truly enter the so-called "legislative moment of AI governance".
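As a flavor of what such a procedural-technological toolbox actually tests, the sketch below implements one canonical check, the demographic-parity gap in approval outcomes, on hypothetical audit data with an illustrative threshold. It shows neither AI Verify's nor watsonx.governance's real interfaces, only the general shape of a bias test that such toolkits automate.

```python
from collections import defaultdict

def demographic_parity_gap(outcomes):
    """outcomes: list of (group, approved: bool).
    Returns (largest gap in approval rates across groups, per-group rates)."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit sample: (demographic group, loan approved?)
audit_sample = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]

THRESHOLD = 0.25  # illustrative policy threshold, not a legal standard

gap, rates = demographic_parity_gap(audit_sample)
print(f"approval rates: {rates}, gap: {gap:.2f}")
print("PASS" if gap <= THRESHOLD else "FLAG for human review")
```

A real toolkit runs many such checks (robustness, explainability, drift) and compiles the results into an auditable report; the point is that the governance requirement is operationalized as a reproducible procedure rather than an abstract principle.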
On another front, in March 2024 more than thirty Chinese and foreign technology experts and business leaders signed the Beijing International Consensus on AI Safety in China, drawing several clear red lines for AI research and development and attempting to build an international cooperation mechanism on those principles. Its main contents include ensuring human control over the replication and iteration of AI systems; opposing the design of large-scale automated weapons; introducing national registration systems to strengthen supervision and conducting international audits in line with global alignment requirements; preventing the proliferation of the most dangerous technologies; developing comprehensive governance methods and technologies; and building a stronger global safety network. A telling contrast: while European technology companies worry that the regulation-heavy EU AI Act will impose compliance costs equal to 17% of AI investment and thereby sap the industry's development, the Beijing consensus calls on governments and companies worldwide to devote one-third of their AI research and development budgets to safety, a stance that leans even more toward regulation. This cost-benefit contest of "33% vs. 17%" seems to mark one commanding height in the contest for rule-making discourse power. In May, the Council of Europe adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, which was opened for signature to all countries on September 5, just over a month ago, without inviting China or Singapore to participate. Evidently, competition over the social values of "human rights, democracy, and the rule of law" constitutes another commanding height of rule-making power. Almost simultaneously, on July 1 the United Nations General Assembly adopted the resolution on enhancing international cooperation on capacity-building of artificial intelligence, proposed by China and co-sponsored by more than 140 countries; and on September 22 it adopted the Pact for the Future together with its annex, the Global Digital Compact, in whose drafting the Chinese government actively participated, submitting a formal letter of comments. In any event, AI governance is bound to become a global issue bearing on the fate of all humankind. How to reach a broader basic consensus on a system of international cooperation, and how to promote the gradual coordination of countries' diverse policy ecosystems, will be the global questions of the next three to five years.
Key issues in the ethical regulation of artificial intelligence
Shen Wei
Professor and Distinguished Professor at the Koguan School of Law, Shanghai Jiao Tong University; Distinguished Professor under the Shanghai "Oriental Scholar" program; and doctoral supervisor
Since the start of the 21st century, the new generation of artificial intelligence has been widely applied. AI technology has developed rapidly in complex real-world scenarios such as image recognition, speech recognition, machine translation, and autonomous driving, and has been continuously embedded in every field of society, profoundly changing human life, modes of production, organizational forms, and social structures. At the same time, artificial intelligence, as an emerging technology, also poses severe challenges to human survival and development. Isaac Asimov's famous "Three Laws of Robotics" are an important guideline for regulating new technologies and supplied a basic idea for the development of AI: simplify ethics and law into a few basic principles and encode them into AI systems in a form machines can understand, thereby achieving ethical control over AI. In reality, however, the ethical and legal problems of artificial intelligence are far more complex, and concrete solutions must be developed that move from principles to rules.
The "Colin Grech dilemma" in technological development is also reflected in the ethical issues of artificial intelligence, that is, the social consequences of new technologies are difficult to effectively predict in the early stages of technological life, and by the time they can be predicted, technology has often become a part of the entire economic and social structure, making it difficult to control. In response to this, Colinglich himself proposed two coping models: one is to predict and avoid damage; The second is to ensure the flexibility or reversibility of relevant decisions, including two methods: the principle of prevention and smart inaction.
At present, international organizations, states, and all sectors of society are broadly concerned with the ethics of artificial intelligence and have formed a series of consensus ethical frameworks and principles, including controllability, transparency, safety, responsibility, non-discrimination, and privacy protection. This mainly reflects the first response to the Collingridge dilemma. However, because the threat of artificial intelligence is not yet immediate, and because countries compete fiercely over AI technology, a complete ban on AI research is unrealistic; for now only vague principles can be proposed to guard against the threats AI may bring. As to concrete regulatory schemes, the European Union, the United States, and China, the "dominant digital powers" in today's global AI development, have given different answers. The EU Artificial Intelligence Act takes human rights and personal data protection as its legislative themes, providing a risk-based regulatory framework for AI, reducing the information asymmetry between producers and consumers of AI technology, and guarding against the potential harms of AI, which the EU regards as necessary to its regulation of AI. The United States has adopted a market-oriented approach, declining to regulate AI extensively and taking more lenient measures lest regulation stifle innovation or distort the market. China attaches more importance to the government's role in supporting AI development and makes AI an important governmental development goal; human rights and the market are not the focus of China's AI rules. These differences show that consensus on the ethical rules and regulatory schemes for artificial intelligence has yet to be reached.
The following aspects are important directions for the future development of ethical rules for artificial intelligence.
First, attention must be paid to the plural objectives of AI regulation. AI regulation encompasses not only the goal of promoting market development but also multiple purposes such as implementing national and social policy, promoting innovation, protecting citizens' rights, enhancing international competitiveness, and safeguarding national security, reflecting considerations of technology, the market, and ethics alike. Although countries differ in their concrete regulatory schemes, the consensus is that AI regulation presupposes the continued development of AI technology. Regulation must therefore balance development and safety, giving equal weight to empowerment and restraint.
Second, the precautionary principle and smart inaction can form a new approach to regulating artificial intelligence. The precautionary principle is most developed in environmental law, but it applies to AI as well: before certain serious risks become real or quantifiable, preventive measures should be taken to prohibit or strictly restrict AI development, drawing an ethical red line for the technology. Smart inaction is the other strategy. Although smart inaction is a form of non-intervention, it differs from passively letting AI technology run its course: it requires the government to actively monitor and engage with emerging markets and their participants, watching how things evolve. Beyond laying down a set of ethical principles, encouraging governance by non-governmental actors such as industry associations and giving play to the initiative of the judiciary can serve as a concrete way for smart inaction to adapt to developments in AI practice.
Finally, responsibility constitutes the core requirement of the ethical regulation of artificial intelligence. AI regulation must not only guard, at the level of outcomes, against the specific risks and decision-accuracy problems AI brings; it must also ensure that responsibility for AI decision-making is borne by humans. This is an ethical requirement of AI regulation. Responsible regulation of AI demands at least a certain degree of transparency and controllability, and imposes stricter requirements on the use of AI in public decision-making.
The governance and regulation of artificial intelligence are trending toward treaty-based, mandatory, hard-law approaches. For example, the Digital Economy Agreement between Australia and Singapore specifically records the parties' consensus on "artificial intelligence", including that both sides shall cooperate, through relevant regional and international forums, to promote the development and adoption of frameworks supporting the trustworthy, safe, and responsible use of AI technologies, and that internationally recognized principles or guidelines should be considered when such AI governance frameworks are developed. Similar international instruments include the Digital Economy Partnership Agreement signed by Chile, New Zealand, and Singapore, which China has applied to join, and the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law opened for signature by the Council of Europe. The emergence of AI-related international treaties helps ease conflicts among domestic laws, coordinate countries' AI regulatory needs, and provide a legal basis for global AI governance, and it represents the direction in which AI regulation is developing.
Comparative perspective on regulation of artificial intelligence
Qiu Yaokun
Associate Professor at Koguan School of Law and China Institute for Socio-Legal Studies, Shanghai Jiao Tong University
Professor Simon Chesterman's article "From Ethics to Law: Why, When, and How to Regulate Artificial Intelligence" explores the threshold questions of AI regulation and seeks to propose regulatory solutions that are more necessary and feasible. This comment takes a slightly different view on why the law should regulate artificial intelligence, and in particular on limiting its substitution for the work of human intelligence. It advocates a comparative perspective, weighing the relative advantages and disadvantages of different regulatory measures and governance methods, and restricting the development and application of AI more cautiously. On the currently hot question of regulating generative and even general artificial intelligence, Professor Chesterman broadly takes the stance of reducing regulation and encouraging development. This comment likewise holds that comparing different types of artificial intelligence helps us better understand the opportunities and challenges this emerging technology brings.
1. Comparison between Law and Other Regulatory Measures
Professor Chesterman believes that the reasons for regulating artificial intelligence are to address market failures, uphold social values, and ensure human primacy and process transparency. Although regulation may hinder innovation, it is an international trend, and the fundamental question is not whether to regulate but when to regulate.
The key issue indeed lies in the timing of regulation. But for disruptive innovations such as artificial intelligence, reducing or delaying legal regulation is justified not only by the wish to encourage innovation but also because other regulatory measures may be able to achieve the same regulatory goals. Technology developers and their firms have standing incentives to pursue transparency in AI's operation and accuracy and fairness in its outputs, because putting opaque, inaccurate, or unfair AI on the market invites public criticism and the risk of liability for damages. The personal-information and intellectual-property risks in AI training data have not gone unaddressed merely because the relevant laws are not yet sound and complete; social norms and market mechanisms also constrain the conduct of the relevant actors. Even "code is law" bears on AI regulation: whether the Singapore government's AI Verify or IBM's watsonx.governance, both represent attempts to regulate artificial intelligence by technical means while balancing security and development.
Therefore, comparing law with other regulatory measures, such as social norms, the market, and code, may better clarify the necessity and timing of regulation.
2. Comparison between Artificial Intelligence and Human Intelligence
Professor Chen believes that "a complete ban on algorithms is unnecessary, not only because any definition of power may include calculations and other basic functions that exercise discretionary power. More importantly, it incorrectly identifies the problem. The problem is not that machines are making decisions, but that humans have given up their sense of responsibility towards them
It is true that humans must retain primacy in AI decision-making, but this does not mean every AI decision requires the same level of human supervision. For simple problems involving clear-cut, binary judgments, artificial intelligence is more accurate and efficient than human intelligence, and its power is less susceptible to being bought off, so such problems are worth entrusting to AI. Here the primacy of human intelligence lies in identifying such problems in advance and designing AI whose performance meets the requirements, not in manually supervising each solution; otherwise the gains in efficiency and fairness sought by applying AI would be lost. Complex problems, by contrast, involve ambiguous areas of law, marginal value judgments, and the balancing of interests; compared with human intelligence, AI has no clear decision-making advantage here and may even be at a disadvantage. For these, human intelligence should remain the principal solver, with AI even excluded, and corresponding safeguards adopted, such as prohibiting outsourcing or ensuring that humans remain traceable in the chain of responsibility.
Therefore, comparing artificial and human intelligence and dividing labor between them, as sketched below, is more conducive to exploiting AI's decision-making advantages while avoiding the erosion of human primacy.
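The division of labor argued for above can be pictured as a simple triage rule. The following sketch is a hypothetical illustration, with invented field names, of routing clear-cut binary questions to AI while keeping value-laden ones with a human decision-maker; it is a schematic of the argument, not a proposal for an actual system.

```python
from dataclasses import dataclass

@dataclass
class Matter:
    description: str
    binary: bool        # a clear yes/no question with objective criteria?
    value_laden: bool   # requires value judgment or interest balancing?

def route(matter: Matter) -> str:
    """Illustrative triage: delegate only simple, value-free binary checks
    to AI; everything ambiguous or value-laden stays with a human."""
    if matter.binary and not matter.value_laden:
        return "AI decides; humans audit the system design, not each output"
    return "human decides; AI at most drafts background material"

print(route(Matter("does the filing meet the page limit?", True, False)))
print(route(Matter("is the contested clause unconscionable?", False, True)))
```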
3. Comparison of Different Types of Artificial Intelligence
Professor Chesterman broadly supports reducing regulation of generative and even general artificial intelligence to encourage development, though without dwelling on the point. In fact, comparing different types of artificial intelligence also helps clarify how regulation should treat generative AI. The technological complexity, conflicts of interest, and human-computer interaction of generative AI further aggravate the inherent difficulty of regulating AI's processes. Yet its functions resemble those of search algorithms and serve to enhance platform interests, pushing two-sided platforms toward a one-sided model. We should therefore continue the approach taken with other AI, regulating primarily its outcomes rather than its processes, while paying special attention to making more data available for its development through legal interpretation, and ensuring that the greater social benefits created by the relevant data accrue broadly to wider social groups.
The boundaries of administrative functions delegated to artificial intelligence systems
Tan Jun
Assistant Researcher at Koguan School of Law, Shanghai Jiao Tong University, and Assistant Researcher at China Institute for Socio-Legal Studies
Professor Simon Chesterman's speech and his recently translated book (We, the Robots? Regulating Artificial Intelligence and the Limits of the Law) confront the many challenges artificial intelligence poses in the digital age and incisively reveal several difficulties of AI regulation: whether the purpose of regulation is to promote or to restrict AI's development; whether to regulate preventively or to wait for risk events before regulating; and how to draw regulation's red lines. The discussion and proposed responses are highly instructive for the AI legislation China is now advancing. One topic Professor Chesterman raises but does not pursue in depth is whether the functions of government agencies can be outsourced to artificial intelligence. He advocates restricting AI from assuming inherently governmental functions, because once government functions or powers are outsourced to AI systems, regulation becomes harder and problems of decision-making legitimacy and procedural legitimacy arise. Regrettably, owing to limits of topic and length, he does not examine which government functions cannot be delegated to AI systems.
This question is also crucial to China's current construction of digital government. As is well known, China is vigorously building digital government, and beyond transforming and upgrading e-government, some agencies have tried applying automated or semi-automated AI systems to governance and administrative law enforcement. Shenzhen, for example, has launched an "automatic approval without human intervention" service; Nanjing's public security organs have launched an "automatic case penalty system"; and Shanghai's Pudong district has introduced "off-site law enforcement". These systems require only that administrators or enforcers input specific facts or information for corresponding results to be generated automatically, replacing the work of officials to varying degrees. Automated and semi-automated AI systems can raise administrative efficiency, but the functions and powers of administrative agencies are conferred by law. May agencies directly or indirectly delegate their powers to AI systems whose behavior is uncertain and hard to interpret? If so, how should the boundaries of legal authorization or delegation to AI systems be drawn, and how can those boundaries be enforced? These are the core questions facing the construction of China's digital government.
Scholars disagree on whether government functions can be delegated to AI systems. In Professor Chesterman's view, the powers of judges and other officials who exercise discretion cannot be outsourced to automated machines, and government functions affecting individual rights and obligations should remain with public officials who can be held accountable through political or constitutional mechanisms. Some scholars, however, hold that even fully automated AI systems are legitimate so long as they obtain special or general authorization from the administrative agency they replace. These two views sit at opposite ends of a spectrum: one fully affirms fully automated administration, the other wholly rejects it. A third view occupies the middle of the spectrum: administrative matters involving no discretion, and matters of ordinary discretion, may by law be authorized to AI systems for decision, while matters involving important discretion may not. This path better fits China's current reality, but which specific administrative matters cannot be authorized to AI systems requires further discussion. Because the application of AI to government functions is still developing and the relevant problems have not fully surfaced, we can only offer a preliminary discussion from basic principles. On the one hand, we should remain cautiously optimistic about applying AI technology; on the other, important matters touching citizens' personal freedom and dignity cannot be authorized to AI systems, whereas matters involving ordinary economic interests can be, because their consequences can be remedied through administrative compensation or indemnification. In this process we must clarify the allocation of responsibility for using AI systems in administration and law enforcement, guarantee "technological due process" for the parties subject to administration or enforcement, and ensure that administrative and judicial organs provide effective channels of relief when public or individual rights are infringed. All of this, of course, remains rather general; in practice, government and judicial departments will need to keep refining the boundaries of administrative AI use in line with the principle of administration under the rule of law (a schematic sketch of such a boundary follows below). I look forward to Professor Chesterman's further guidance on these questions.
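The following sketch encodes, with hypothetical category names, the rule of thumb proposed above: matters touching personal freedom or dignity, or involving important discretion, are never delegable, while non-discretionary and ordinary economic matters may be automated only where an effective channel of relief exists. It is a schematic restatement of the argument, not a statement of Chinese law.

```python
from enum import Enum, auto

class MatterKind(Enum):
    PERSONAL_FREEDOM_OR_DIGNITY = auto()  # e.g., detention -- never delegable
    IMPORTANT_DISCRETION = auto()         # weighing competing interests
    GENERAL_ECONOMIC = auto()             # remediable via compensation
    NON_DISCRETIONARY = auto()            # facts in, fixed rule out

DELEGABLE = {MatterKind.GENERAL_ECONOMIC, MatterKind.NON_DISCRETIONARY}

def authorize_automation(kind: MatterKind, has_relief_channel: bool) -> bool:
    """An AI system may decide only delegable matters, and only if an
    effective channel of administrative or judicial relief exists."""
    return kind in DELEGABLE and has_relief_channel

assert authorize_automation(MatterKind.NON_DISCRETIONARY, True)
assert not authorize_automation(MatterKind.PERSONAL_FREEDOM_OR_DIGNITY, True)
assert not authorize_automation(MatterKind.GENERAL_ECONOMIC, False)
print("delegation boundary checks passed")
```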
Challenges and responses in human-machine collaborative court trials
Yi Junlin
Assistant Professor (Postdoctoral) at Koguan School of Law, Shanghai Jiao Tong University
In recent years, a series of disruptive breakthroughs in large language models has greatly lowered the threshold for using AI technology, and machine algorithms have become important collaborators in people's work and lives: human-machine collaborative decision-making has become the new normal. In particular, the penetration of artificial intelligence into judicial trial work has drawn wide attention and discussion in both theory and practice. At present, China's court system insists on the judge's position as the principal adjudicator in human-machine collaborative trials; in short, the human judge answers for the outcome of the trial. On that basis, human-machine collaborative trials in judicial practice show two distinct features. First, in the internal division of labor, "the judge is primary, the machine auxiliary": the judge, as the final adjudicator, decides how far to adopt the machine's assisting suggestions. Second, in external presentation, "the judge stands in the light, the machine in the dark": the judge handling a case need not disclose the machine's suggestions to the public, and the AI remains hidden behind the human judge. Notably, because the machine algorithm is currently invisible behind the judge, the judge's attitude toward machine suggestions is a mystery: whether a human judge merely "considers" or in fact "relies on" the machine's output is known only to that judge, which sows the seeds of arbitrariness. As the maxim goes, "justice must not only be done, but must be seen to be done." How human-machine collaborative trials can deliver justice in a visible way has become an ever more serious challenge. In other words, the core challenge is how to align the internal division of labor with the external presentation of human-machine collaboration, and thereby win public trust. A more radical solution would change the existing division of roles between judges and algorithms, letting artificial intelligence step from backstage to front stage. Some experts and scholars, for example, advocate having AI directly decide simple, small-value cases, grounding the legitimacy of AI adjudication in the parties' consent. In my view, such a plan is a drastic transformation of the state's judicial power, implicating basic values such as judicial authority and human dignity; it remains theoretical and hard to put into practice. By contrast, returning to the basic spirit of judicial procedure while respecting the judge's subjectivity may offer a more practicable line of action. Modern legal procedure reached a fairly mature basic form before the AI era, but it cannot be denied that legal procedure embodies enduring laws and values. Among these, the communicative and deliberative dimension of legal procedure deserves particular attention. In short, judicial trial is a contest of discourse, and legal procedure is a process of communication and deliberation.
In the process of continual argument and rebuttal, the diversity of candidate solutions gradually narrows until a generally recognized or accepted solution, or at least a single adjudicative answer, is finally reached. In the age of artificial intelligence this fundamental value of communication and deliberation has not changed, but AI is gradually becoming a new interlocutor in judicial trials that cannot be ignored. The fundamental change and challenge brought by human-machine collaborative trials lies, in the end, in communication and interaction between humans, and between humans and machines. In this sense, although artificial intelligence cannot become the "subject of adjudication", it can still be cast as a "communicative subject" in legal deliberation and thereby be given a clearer place in the judicial process.
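One concrete way to align the internal division of labor with the external presentation discussed above would be an append-only audit record of what the machine suggested and how the judge disposed of it. The sketch below is purely illustrative, with invented field names and data; no actual court system's interface is implied.

```python
import json
from datetime import datetime, timezone

def record_collaboration(case_id, machine_suggestion, judge_ruling,
                         adopted, reasons):
    """Append one audit entry: the machine's suggestion, the judge's
    ruling, and the judge's stated reasons for adopting or departing."""
    entry = {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "machine_suggestion": machine_suggestion,
        "judge_ruling": judge_ruling,
        "suggestion_adopted": adopted,
        "reasons_for_departure": None if adopted else reasons,
    }
    with open("collaboration_audit.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry

entry = record_collaboration(
    case_id="(2024)-Civil-001",
    machine_suggestion="support claim; suggested damages 50,000",
    judge_ruling="support claim; damages 30,000",
    adopted=False,
    reasons="defendant's partial performance reduces the loss",
)
print(entry["reasons_for_departure"])
```

A record of this kind makes the judge's "consider or rely" stance reviewable after the fact, turning the machine from a hidden influence into a visible, answerable participant in the proceedings.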
How does the law implement the principled consensus of AI ethics?
——A short review of the article 'From Ethics to Law: Why, When, and How to Regulate AI'
Zhao Zerui
Assistant Researcher at Koguan School of Law, Shanghai Jiao Tong University, and Assistant Researcher at China Institute for Socio-Legal Studies
The social transformation triggered by AI has revived discussion of how law and ethics interact. The discussion of the relationship between law and ethics began as early as the fifth century BC; by the late nineteenth century it had gradually waned with the decline of natural law, and the rise of analytical jurisprudence drew the boundary between law and ethics ever more sharply. [See Roscoe Pound, Law and Morals, Chinese edition, Beijing: The Commercial Press, 2016, pp. 4, 34.] Yet as Pound said, "Law cannot depart far from ethical custom, nor lag far behind it. For law does not enforce itself. It must be initiated, maintained, and directed by individual human beings; something more comprehensive than the abstract content of legal norms must move them to act and set their course... The central portions of law and of ethics are quite distinct, but their edges overlap." [Roscoe Pound, Law and Morals, Chinese edition, Beijing: The Commercial Press, 2016, p. 90.] The ethical and legal problems arising from AI have brought this question of the relationship between the two back into focus. On the one hand, countries need to articulate their own AI ethical principles to guide, abstractly and comprehensively, the management of AI risks, setting the consensual tone and goals of national AI legislation; on the other hand, governments must enact AI legislation to translate that principled ethical consensus into practical, actionable regulatory rules.
The article "From Ethics to Law: Why, When, and How to Regulate AI" comprehensively summarizes the current status and challenges of AI ethics, and deeply reflects on the lack of interdisciplinary research between AI ethics and law. It refers to the current discussion on AI ethics, which focuses too much on finding consensus based normative principles, while there is too little discussion on how these seemingly consensus based normative principles can be legally implemented and whether it is necessary to do so. This has led to a global consensus among countries on six AI governance principles, including "human control," "transparency," "security," "accountability," "non discrimination," and "privacy," but there has been no progress on how to implement these principles through legal means. The root cause of this situation lies in the uncertainty of artificial intelligence. The uncertainty of artificial intelligence forces these ethical principles to be applied to technologies that are not yet known and to address unforeseen issues, making it difficult for the law to translate abstract and comprehensive ethical principles into concrete and practical regulatory measures. This article responds to this dilemma from three aspects: why AI ethics need to be regulated by law, when law needs to regulate AI ethics, and how to implement the legal implementation of AI ethics.
Should law regulate in the face of the uncertainty confronting AI ethics? The article argues that, at the substantive level, AI ethics can supply a moral foundation for legal regulation, for instance prohibiting biased AI and AI used in weapons so as to promote fairness and defend human rights; at the procedural level, AI ethics can supply necessary preconditions for legal regulation, such as transparency and explainability, which make regulation and accountability possible. The real question posed by AI's uncertainty is therefore not whether to regulate but when. Here the article, invoking the Collingridge dilemma, points out that most existing research confronts the potential future risks of new technologies by prediction and avoidance based on ethical principles, an approach that, though broad enough to accommodate technological change, is also vague and offers no guidance in concrete cases. The article therefore addresses the timing of legal regulation of AI ethics through the precautionary principle and smart inaction. As to how, it explores the legal implementation of AI ethics through binding commands, fiscal and other incentives, and market forces, arguing that multiple regulatory actors and tools will cope with the uncertainty of AI ethics better than a single agency wielding a single measure. The article further notes that law should advance the implementation of AI ethics in three ways: establishing risk-management measures, drawing legal red lines, and restricting outsourcing to AI.
In sum, starting from a reflection on the state of AI ethics, the article offers insightful discussion of why, when, and how regulators should advance the legal implementation of AI ethics. Several highlights merit further thought. First, how can law respond to the uncertainty of AI? That uncertainty appears not only in ethics but also in AI's conceptual contours, technological paths, modes of application, and rules of governance, all of which shift as society develops. How legislation, which is static, can govern a technology in constant motion has become a question contemporary law must answer, and the article offers many valuable ideas on it. Second, there is the plurality of AI governance, which involves government, market, and the general public alike; how to define the roles of these three parties and coordinate them effectively so that governance decisions are appropriate, reasonable, and accountable will be a key question for AI legislation. Finally, there are the substantive and procedural rules of AI legislation. The article shows that part of AI ethics consists of substantive principles supplying goals or moral foundations for legal regulation, such as "human control" and "non-discrimination", while another part consists of procedural principles supplying the preconditions of accountability and channels of communication for legal regulation, such as "transparency". This prompts us to consider how substantive and procedural rules should divide labor and cooperate in AI legislation so as to promote plural and agile AI governance; on this will depend the path by which China's AI legislation uses the stillness of law to govern the motion of technology.