DING Xiaodong
Abstract: China's artificial intelligence legislation needs a global comparative perspective. At present, US artificial intelligence legislation emphasizes market dominance and corporate self-regulation, regulating only in fields such as export control, national-security-related information sharing, civil rights protection, and consumer protection. The EU, eager to exert the Brussels effect, has adopted unified legislation and risk regulation for artificial intelligence and is preparing to bring artificial intelligence systems within the scope of product liability and to establish special rules on the burden of proof. China's artificial intelligence legislation should adhere to a scenario-based regulatory approach, refrain from rushing into unified legislation, and formulate a comprehensive artificial intelligence law only when the time is ripe. Such a law can be developed at three levels: general principles, public law, and private law. Its general principles should reflect the values of development, equality, openness, and security; its public law rules should target major public risks, apply scenario-based regulation to other risks, respect industry self-regulation, and prevent government departments from legislating beyond their authority; its private law system should impose product liability on end products rather than on the artificial intelligence system itself, and may formulate special tort rules for artificial intelligence, but should first accumulate relevant experience through judicial practice.
Keywords: artificial intelligence legislation; risk regulation; scenario-based regulation; artificial intelligence torts; product liability
Artificial intelligence legislation faces several classic challenges. First, it must balance development and security, vitality and order. Confronted with the various risks posed by artificial intelligence, the law must prevent the technology from developing in a "lawless zone"; at the same time, it must promote the development of artificial intelligence for the benefit of the public and avoid the "greatest insecurity" of failing to develop it at all. Second, there is a tension between the rapid development of artificial intelligence and the relative stability of law. In theory, this tension is captured by the Collingridge dilemma or the so-called "pacing problem": law and technology develop at very different rhythms and speeds. Third, there is a tension between the scenario-specific nature of artificial intelligence issues and the generality of law. Artificial intelligence raises both public law and private law questions, and the problems differ across scenarios, such as risk regulation of artificial intelligence embedded in various products and services, personal information protection, speech-related torts, consumer protection, the legality of training data, and the ownership of copyright in AI-generated works. Traditional legislation, however, is organized mainly by branch of law or by industry, emphasizing the unity and coherence of the legal system. Fourth, artificial intelligence also raises problems of global competition and coordination. With artificial intelligence seen as representative of new productive forces, how legislation can help a country lead the world in both AI technology and AI rule-making has become a strategic question for every country. For China, how to legislate on artificial intelligence, and whether to formulate a unified artificial intelligence law, have likewise become the focus of debate.
This article analyzes these issues from a comparative perspective, focusing on the artificial intelligence legislation of the United States, the European Union, and China. A comparative perspective is chosen not only because the global competition and coordination mentioned in the fourth point above necessarily involve different countries and regions, but also because artificial intelligence raises similar problems worldwide. For example, AI legislation and policy-making in every country invoke the balance between development and security; countries have debated the timing and rhythm of AI legislation; and different solutions have been proposed to reconcile the uniformity and the scenario-specific nature of AI legislation. The United States, the European Union, and China are chosen for analysis because they have become the most important countries and regions in the fields of artificial intelligence and AI rule-making. In the words of Professor Anu Bradford, these three are globally recognized "dominant digital forces."
Specifically, the first and second parts of this article analyze the AI legislation of the United States and the European Union respectively, pointing out that the two have adopted superficially similar but in fact very different legislative approaches. The third part, from a comparative perspective encompassing the United States, Europe, and other countries and regions, distills the general principles of AI legislation along three dimensions: legislative values, public law, and private law. It argues that US AI legislation suffers from hegemony and exclusivity, but that its development orientation, pragmatism, and scenario-based regulatory approach are consistent with China's; EU AI legislation falls into the traps of value superiority and over-regulation, but its position on protecting vulnerable groups is reasonable. The fourth part surveys the current state of AI legislation in China and analyzes China's future AI legislation, especially a comprehensive AI law. In the face of US-EU coordination, China's AI legislation should be guided by the values of development, equality, openness, and security, applying narrowly targeted regulation to major public risks while relying on corporate self-regulation and ex post private law remedies for other risks. The fifth part concludes, arguing that China's AI legislation should maintain strategic patience, reserving room for the exploration and development of artificial intelligence while accumulating sufficiently deep academic research and practical experience for future legislation.
1. AI legislation in the United States
In the development of the AI industry and AI technology, the United States leads the world, and the US legal system plays a key role in that leadership. As some scholars have put it, the United States leads the world in the Internet and information technology largely because "the law created Silicon Valley." In addition, the US legislative system is relatively complex, and Chinese academic writing often misreads some US legislation. It is therefore necessary to begin by clarifying US AI legislation.
1.1 Federal level
Chinese papers and online articles often state that the United States has "formulated" or "passed" various AI acts, mentioning, for example, the Algorithmic Justice and Online Platform Transparency Act of 2021, the Filter Bubble Transparency Act, and the Algorithmic Accountability Act of 2022. In fact, none of these proposals has passed a vote of the US Congress or become law. To date, the United States has not passed, nor come close to passing, any AI statute at the federal level. In the US legislative system, only a small fraction of congressional bills eventually become law. Presenting large numbers of bills introduced by members of Congress as if they were enacted statutes risks creating misunderstandings of US AI legislation in the Chinese academic community.
In the foreseeable future, the US Congress is unlikely to undertake comprehensive AI legislation at the federal level, for several reasons. First, US legal and political culture has always been highly tolerant of technological innovation and correspondingly cautious about comprehensive technology-regulation statutes. Since artificial intelligence is regarded as representative of the United States' "new productive forces" and national competitiveness, it is unlikely to be subjected to such legislation in the short term. Second, the legal issues raised by artificial intelligence touch on every field. Compared with personal information protection or information privacy law, artificial intelligence law is far from forming a unified legal tradition and a mature institutional framework. Even in personal information protection, where the institutional framework is mature and most of the world has already legislated, the US Congress has not passed federal legislation. In the field of artificial intelligence, hasty congressional legislation is even less likely.
At the federal level, existing US artificial intelligence "legislation" is limited to executive orders issued by the President. The quotation marks are warranted because the President's authority to issue executive orders derives either from the executive power granted by Article II of the US Constitution or from congressional authorization. Executive orders may carry legal force and effect, but they are not statutes in the strict sense. Overall, US executive orders concentrate on national security, the military, and foreign relations. In recent years, the United States has issued a series of AI-related executive orders with foreign dimensions, covering AI export controls, investment, and cross-border data. Domestically, presidential executive orders have mainly provided normative guidance on the use of AI by federal regulatory agencies, while obligations imposed on companies have been largely limited to information-sharing duties in AI applications implicating national security.
Take the executive orders of the Trump and Biden administrations as examples. The Trump administration's Executive Order 13859, "Maintaining American Leadership in Artificial Intelligence," and Executive Order 13960, "Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government," mainly regulated the use of AI by US government agencies. The Biden administration's Executive Order 14110, on the safe, secure, and trustworthy development and use of artificial intelligence, likewise mainly required federal regulatory agencies to formulate standards and norms in fields such as criminal justice, education, health care, housing, and labor. For AI companies operating in the market, Executive Order 14110 invokes the Defense Production Act of 1950 to require companies to share information with the federal government about AI model training that poses serious risks to national security or to public health and safety. The Defense Production Act empowers the President to order companies to produce goods and provide services in support of national defense, and the Biden administration sought to invoke it to impose limited regulation on AI companies.
Other AI policies, standards, and statements by US federal agencies are likewise not legally binding and do not constitute new legislation. For example, the Blueprint for an AI Bill of Rights issued by the White House Office of Science and Technology Policy is a non-binding framework intended to protect citizens from AI-based discrimination and to ensure privacy and security. The Federal Trade Commission has issued a series of statements and opinions on AI, but these mainly apply existing consumer protection law to AI and algorithmic decision-making rather than creating new or comprehensive AI legislation. The AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST) under the US Department of Commerce is a relatively comprehensive AI standard: it addresses AI risks in general terms and constructs a risk management framework organized around the AI life cycle. This framework, however, is a document that enterprises adopt voluntarily; it has no legal or regulatory force. Although NIST retains "standards" in its name, it has long moved away from single prescriptive standards in software, AI, and related fields, relying on enterprise self-regulation rather than a government-set uniform standard to manage the relevant risks.
1.2 State-level legislation
State-level legislation in the United States is comparatively easy to pass. At present, states such as Colorado and Utah have enacted laws on, or related to, artificial intelligence. In general, these state laws regulate AI mainly in areas such as consumer protection, anti-discrimination, and civil rights protection, and the obligations they impose on AI companies remain very limited. By contrast, California, home to many large Internet and AI companies, is focusing its pending legislation on major public safety risks. The regulatory thresholds it sets exclude most companies from coverage, and it adopts a scheme that looks like mandatory regulation but in substance rests on corporate self-regulation and compliance-based exemption.
Take the Colorado Artificial Intelligence Act as an example. Colorado is considered the first US state to regulate high-risk artificial intelligence. The Act provides that an AI system is high-risk if it affects consumers' educational enrollment or educational opportunities, employment or employment opportunities, financial or lending services, essential government services, health care services, housing, insurance, or legal services, or otherwise produces legal or similarly significant effects. In scope, the Colorado Act was influenced to some extent by the EU Artificial Intelligence Act discussed below, since both center their regulation on high-risk AI. Unlike the EU, however, the Colorado Act expressly frames itself as "consumer protection in interactions with AI systems" rather than as regulation of AI across all fields.
As for the obligations attached to high-risk AI, the Colorado Artificial Intelligence Act requires developers and deployers to use reasonable care to protect consumers from algorithmic discrimination. Developers must, for example, provide deployers with the information needed for impact assessments; notify the Attorney General and deployers within a prescribed period after discovering a risk of algorithmic discrimination; and publish on their websites a statement describing their high-risk AI systems, the foreseeable risks of algorithmic discrimination, and each system's purpose, intended benefits, and uses. Deployers must implement risk management policies and programs, complete impact assessments of AI risks, inform consumers of the use of AI, notify the Attorney General within a prescribed period after discovering algorithmic discrimination, and publish on their websites the high-risk AI systems they deploy together with any known or reasonably foreseeable risks of algorithmic discrimination. Developers and deployers that satisfy these requirements are presumed to have used the reasonable care needed to avoid algorithmic discrimination.
Other states that have legislated on artificial intelligence have likewise taken the path of consumer protection or civil rights protection, extending traditional requirements such as the consumer's right to know and anti-discrimination into the AI field. Compared with Colorado, however, the duties imposed by other states are currently lighter. For example, the Utah Artificial Intelligence Policy Act mainly prescribes disclosure obligations for the use of generative AI. Members of "regulated occupations," such as lawyers and doctors, must "prominently" disclose to consumers when they use generative AI to interact with them; entities outside the regulated occupations but subject to Utah's consumer protection law must "clearly and conspicuously" disclose their use of AI.
California's legislation focuses on promoting AI development while regulating the major public risks it may create. California's SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is currently under review in the California State Assembly. The bill treats AI models trained with computing power greater than 10^26 operations as "covered models" and requires their developers to adopt safety protocols to prevent "critical harms," such as the creation or use of chemical, biological, radiological, or nuclear weapons causing mass casualties, cyberattacks on critical infrastructure causing losses of more than $500 million, criminal conduct causing comparable losses, and similar harms to public safety. It should be noted, however, that the bill's regulatory threshold is in practice very high: the 10^26 threshold excludes essentially all existing AI models. The bill also provides a "limited duty exemption" for covered models, under which developers that satisfy its list of risk-management obligations can be exempted from legal liability. Overall, the regulatory targets and the risks addressed by California's pending legislation are very limited, and the bill essentially takes the position of industry self-regulation and compliance-based exemption.
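To make the bill's threshold logic concrete, the following minimal Python sketch (with hypothetical names and illustrative figures, not the statutory text) shows how a compute-based coverage test of this kind operates:

```python
# Illustrative sketch only, not the statutory text: how SB-1047's
# compute-based coverage test works. The 1e26 figure comes from the bill as
# described above; the function and variable names are hypothetical.

COVERED_MODEL_THRESHOLD_OPS = 1e26  # training compute threshold in the bill

def is_covered_model(training_compute_ops: float) -> bool:
    """Return True if a model's training compute exceeds the bill's threshold."""
    return training_compute_ops > COVERED_MODEL_THRESHOLD_OPS

# Frontier training runs to date are commonly estimated at around 1e25
# operations, which is why the text notes that the threshold excludes
# essentially all existing models.
print(is_covered_model(1e25))  # False: below the threshold, outside the bill
print(is_covered_model(3e26))  # True: developer must adopt safety protocols
```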
2. AI legislation in the EU
Unlike the decentralized, market-oriented approach of the United States, the EU has adopted unified regulatory legislation for AI. At the level of public law supervision, the EU formally adopted the Artificial Intelligence Act in 2024, establishing a unified, tiered approach to AI risk regulation. In the field of product liability and tort, the new Product Liability Directive and the AI Liability Directive being drafted by the EU would establish product liability for AI systems and special burden-of-proof rules for AI-related torts.
2.1 Unified risk regulation
The EU Artificial Intelligence Act divides AI systems into two categories: AI systems that form part of products, and stand-alone decision-assisting AI systems independent of products. The former includes AI systems embedded in products such as medical devices, autonomous vehicles, ships, and toys; the latter includes AI systems used in application scenarios such as recruitment, school admissions, immigration screening, and law enforcement inspections. The Act acknowledges the significant differences between the two: the former produces product-related harms to safety, person, and property, while the latter mainly produces infringements of fundamental rights. Nevertheless, the Act regulates these different types of AI systems together in a unified scheme, distinguishing neither between product-type and decision-assisting AI systems nor between AI systems used in the public and private sectors.
In risk classification, the EU Artificial Intelligence Act likewise adopts a unified scheme, dividing risks into four tiers: prohibited, high, limited, and minimal, with regulation focused on the high-risk tier. Prohibited practices mainly include the use of subliminal techniques to materially distort the behavior of individuals or groups in ways that cause significant harm, the exploitation of specific vulnerable groups, social scoring, and certain intrusive law enforcement uses of AI. High-risk systems mainly include AI applied within the scope of existing EU product safety law, as well as AI used in specific areas such as biometrics, critical infrastructure, education and vocational training, employment, worker management and access to self-employment, access to and enjoyment of essential private and public services and benefits, law enforcement, migration, asylum and border control management, and judicial and democratic processes.
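To illustrate the structure of this unified taxonomy, the following minimal Python sketch (with hypothetical names and an illustrative, non-exhaustive mapping of use cases) represents the Act's four tiers as a simple data structure:

```python
# A minimal sketch of the EU AI Act's four-tier risk taxonomy as summarized
# above. The tier names mirror the Act; the example mapping of use cases to
# tiers is illustrative, not an exhaustive restatement of the Act or Annex III.

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g. subliminal manipulation, social scoring
    HIGH = "high"              # e.g. biometrics, critical infrastructure, hiring
    LIMITED = "limited"        # lighter transparency-style duties
    MINIMAL = "minimal"        # no special obligations

# Hypothetical lookup table; under the Act itself the high-risk list sits in
# Annex III and can be amended by the European Commission (see section 3.2).
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "border_control_assessment": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations_apply(use_case: str) -> bool:
    """High-risk uses trigger the provider/deployer duties described below."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL) is RiskTier.HIGH
```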
For high-risk AI, the provider of the AI system must assume a series of obligations before the system is placed on the market, including a risk management system, data governance, technical documentation, record keeping, transparency and the provision of information to deployers, human oversight, and accuracy, robustness, and cybersecurity. After market entry, the provider must establish and document a "post-market monitoring system" commensurate with the nature of the technology and the risks of the high-risk AI system. Deployers of high-risk AI systems bear comparatively lighter obligations: they must take appropriate technical and organizational measures to use the systems in accordance with their instructions for use, assign human oversight to natural persons, ensure that input data is relevant to the system's intended purpose and sufficiently representative, monitor the operation of the system in accordance with the instructions for use and inform providers where necessary, and retain automatically generated logs.
With the explosive development of generative AI and large models in recent years, the EU Artificial Intelligence Act introduced the concept of the "general-purpose AI model" during the revision process and made special provision for it. Providers of such models must, among other things, prepare technical documentation, disclose information and documentation to downstream providers of AI systems, provide summaries of the content used for model training, and cooperate with competent authorities. In addition, a model with "high impact capabilities" may be classified as a "general-purpose AI model with systemic risk," whose provider bears further obligations, such as conducting model evaluations, assessing and mitigating systemic risks, tracking, documenting, and reporting serious incidents and possible corrective measures, and ensuring an adequate level of cybersecurity protection.
2.2 Product Liability and Tort Liability
Corresponding to the public law regulation of AI, the EU has also legislated specially on AI in product liability and tort liability. On product liability, the European Commission proposed a new draft Product Liability Directive in 2022, intended to update the Product Liability Directive that had been in force for some forty years and adapt it to the digital age. Under the draft, software and stand-alone AI systems are for the first time included in the category of "products," and the manufacturers of these products bear strict liability for defects. Where a non-stand-alone AI system is integrated into a product, or is indispensable to the functioning of a product, it constitutes a "related service" and becomes a "component" of the product; the manufacturer of a product component likewise bears strict product liability.
On tort liability, the European Commission proposed the draft Artificial Intelligence Liability Directive in 2022, which addresses AI-related torts on a fault basis. The draft can be regarded as the tort law companion to the Artificial Intelligence Act, focusing on liability for high-risk AI and, in particular, on the burden of proof in high-risk AI torts. Under the draft, if an AI system fails to comply with EU or national law prescribing a duty of care for AI systems, a rebuttable presumption of causation arises between the system's output, or its failure to produce an output, and the resulting damage: the law presumes a causal link, but the presumption can be rebutted. In addition, national courts may order the provider or user of a high-risk AI system to disclose relevant evidence and information, so as to offset the plaintiff's informational disadvantage in such cases.
The relationship between the new draft Product Liability Directive and the draft AI Liability Directive is extremely complex, and some of their overlaps have drawn sharp criticism from many scholars. For reasons of space, this article does not discuss these issues in detail and notes only the overall intent behind the two instruments. The new draft Product Liability Directive leans toward unifying the laws of EU member states to promote the free movement of products, and therefore adopts strict product liability. The draft AI Liability Directive remains grounded in the tort law of member states, and therefore retains fault liability and makes special provision only for the burden of proof. Judging from the content of the two instruments and the current legislative process, the new Product Liability Directive is likely to be adopted first, and its application will largely displace the AI Liability Directive. However, because remedies under EU product liability are limited to life, bodily integrity, health, and property, the AI Liability Directive, if eventually adopted, together with the tort laws of member states, would still play a role in AI torts such as invasions of privacy, insult, and defamation.
3. The Principles of Artificial Intelligence Legislation from a Comparative Perspective
From a comparative perspective, the general principles of AI legislation can now be distilled. Starting from the US and EU legislation described above, this section analyzes the legislative values, public law regulation, and tort and product liability of AI, in combination with the legislative practice of other countries and regions.
3.1 Legislative values
At the level of legislative values, although both US and EU legislation invoke the balance between AI development and safety, their orientations differ markedly. The United States has taken a market-led, innovation-driven approach to the risks of domestic AI development. US law, politics, and culture have long trusted the power of the market and supported enterprises in providing consumers with better and more diverse products through competition. On questions of risk, the United States has leaned toward individual responsibility and remained wary of excessive government intervention in the market. Against this background, it is not hard to understand why US AI legislation is aimed mainly at government regulatory agencies, while obligations on enterprises are limited to the consumer's right to know and anti-discrimination protections. Regulating government use of AI preserves governmental credibility and protects citizens' basic rights; the consumer's right to know enables individuals to make informed choices and keeps the market mechanism working; and anti-discrimination rules in specific fields push employers, admissions offices, and similar actors toward decisions consistent with market rationality.
In its external dimension, US AI legislation combines the value orientations of active competition, international coordination, and containment. First, the United States has long been skeptical of what it sees as excessive EU government regulation, and this holds in AI as well, making it impossible for the United States to imitate the EU's legislative approach. Even the "blue states" ideologically closer to the EU, such as Colorado and Utah discussed above, have borrowed only a small number of institutional tools from the EU, and their laws remain essentially different from comprehensive EU-style AI regulation. Second, to exert international leadership, and in particular to counter China's influence, the United States has actively projected its rule-making influence through bodies such as the Organisation for Economic Co-operation and Development (OECD) and the G7. In particular, after taking office in 2021 and seeking to reverse the isolationism of the Trump administration, the Biden administration strengthened coordination with the EU on technology and AI rules. Finally, to compete with China, the United States has pursued a "small yard, high wall" strategy in AI, reflecting a closed and hostile Cold War mentality.
In contrast to the United States, the EU's value orientation gives more weight to the safety of AI itself. This orientation has a long history. From agricultural products to industrial goods to human rights protection, the EU has prided itself on "high standards and strict requirements" in almost every field. Its food safety standards for agricultural products are stricter than those of the United States, for example, and it regards its human rights protections as the highest in the world. It is with this confidence that the EU has legislated across the Internet, personal data, and artificial intelligence, believing that such legislation can exert the Brussels effect and elevate EU law into the global gold standard. In agriculture and industry, this strategy has indeed achieved considerable success: although the EU's agriculture and industry as a whole cannot match the US market-oriented model, the strategy has worked well in many market segments, especially high-end consumption. Many EU agricultural products, catering services, cosmetics, and safety-critical industrial goods are highly competitive, and EU rule-making carries strong soft power worldwide.
In the digital field, however, the EU's hard-power influence in industry and technology has gradually faded, and its influence is now concentrated in the soft power of rule-making. In the digital economy, including AI, global industry and technology are concentrated mainly in the United States and China, above all the United States, while the EU has almost no especially influential digital companies or leading technologies. This situation makes EU digital legislation a double-edged sword. For countries and regions whose digital industries resemble the EU's, following the EU model is relatively simple and convenient: it both regulates the domestic digital industry and provides effective oversight of large foreign Internet and technology companies. But for countries and regions with well-developed digital industries, or those striving to develop them, EU-style legislation creates serious obstacles to digital innovation and development.
In AI legislation, it is therefore predictable that other countries and regions will not follow the EU any time soon. Countries and regions such as Singapore and India have issued AI policies and documents, but these are non-mandatory. Singapore's Model AI Governance Framework and India's "Responsible AI" documents, for example, are recommendatory and advocacy norms without legal force for enterprises. Global AI development is in the ascendant and the technology is at a critical stage; every country regards it as a component of national competitiveness and a development opportunity. At such a moment, countries are unlikely to follow the EU into comprehensive legislation. This differs sharply from personal information protection, where most countries followed the EU mainly because the institutional framework of personal information protection law was essentially settled, and because personal information bears closely on citizens' basic rights and on cross-border data flows, prompting a worldwide wave of legislation modeled on the EU General Data Protection Regulation (GDPR). In AI, no such consensus on value orientation has yet formed.
3.2 Risk Regulation
In risk regulation, regulating AI by field and scenario better accords with the basic principles of AI legislation. The reason is that AI is deeply embedded in different sectors, industries, products, and services, and its risks vary across fields and scenarios. Take the public-private divide: public sector use of AI, especially by administrative authorities, carries greater risk. When the public sector applies AI to judicial adjudication, law enforcement, or school admissions, the systems directly affect citizens' rights to know and to due process. When such systems make unreasonable decisions, the harm can reach a wide range of subjects, damaging both individual rights and public trust in government. The private sector is different. Even when AI systems adversely affect users or consumers, the effects are mainly confined to consumer rights in a market setting, and consumers can offset them by choosing other companies' products or services. Under competitive pressure, the private sector is also typically more agile than public institutions, responding promptly to consumer complaints and demands, and it has stronger incentives and capacity to correct the negative effects its AI systems may produce.
The risks of AI systems also differ across industries, products, and services. By industry, for example, AI in civil aviation differs greatly from AI in the automotive industry: aviation AI bears directly on flight safety, tolerates very little risk, and is concerned chiefly with the safety of the aircraft itself, whereas automotive AI tolerates comparatively more risk and is concerned chiefly with risks between vehicles and between vehicles and pedestrians. By product, products within the same industry tolerate AI risk differently: among medical products, an AI device used in cardiac surgery and an AI device used for general clinical assistance face different risks. By service, AI used in professional services differs from AI used in non-professional services: in medicine, law, and other traditionally state-regulated service industries, the use of AI may create greater risks, while for a general-purpose voice chat service the risks are far smaller.
Given the embedded character of AI systems, the more reasonable approach is to return AI risk regulation to its various fields and scenarios, rather than to regulate AI uniformly as a free-standing object. To scholars of AI law this may sound disappointing, because it means AI regulation will merge back into traditional legal fields and branches. But the approach better fits the scenario-specific character of AI and avoids an unrealistic unified scheme. If AI were regulated uniformly under a single risk classification, the law would inevitably become highly uncertain and hard to operate, and at the operational level it would collapse back into regulation by field, industry, and branch. Imagine regulating medical device AI and autonomous driving AI under one scheme and asking regulators or professional assessors to conduct risk assessments of both: they would inevitably have to turn back to experts from the medical and automotive industries.
In this respect, the EU's regulatory path is unsound. EU AI risk regulation is too fixated on formal uniformity, sorting AI risks from different industries and fields into a single set of tiers. The division is in fact quite arbitrary and is bound to generate major controversy in future risk assessments. The EU's treatment of general-purpose AI illustrates the point. After the 2021 draft of the AI Act proposed four risk tiers (prohibited, high, limited, and minimal), the emergence of generative AI left the EU at a loss as to which tier it belonged in. The EU was forced to revise the draft, carve out the "general-purpose AI model" as an independent category, and introduce the separate classification of "systemic risk." Whatever one thinks of the obligations imposed on AI with "systemic risk," the very existence of the "general-purpose AI model" category exposes the dilemma the EU faces in unified regulation and unified risk classification. As new AI technologies emerge and risks shift across scenarios, it is hard to imagine the EU grading different types of AI risk uniformly and scientifically.
By comparison, the narrowly tailored legislative approaches of the United States and some other countries and regions better accord with the general principles of AI risk regulation. Rather than pursuing unified regulation, this approach relies on existing fields, industries, and institutions to regulate AI. Besides connecting effectively with existing regulatory systems and preserving the stability and operability of the legal order, it can also define AI risks in different scenarios more scientifically and reasonably. In fact, even the EU has recognized the scenario-specific nature of AI risk to a degree: the AI Act distinguishes AI systems that form part of products from stand-alone AI systems, and for the latter the European Commission may amend the catalogue of high-risk applications listed in Annex III. These provisions show that the EU acknowledges, at the legislative level, the contextual and dynamic character of AI systems. At the enforcement and judicial levels, the EU is likewise very likely to consult regulators and experts from different fields, industries, and branches and to apply different risk regulation to different scenarios.
3.3 Product Liability
Imposing product liability on AI systems themselves is likewise unreasonable. In brief, product liability rests on several rationales. First, product manufacturers owe consumers safe products, akin to an implied warranty in contract law. Second, from the perspective of law and economics, actors in the supply chain, above all manufacturers, can avoid accidents most easily; as the "least-cost avoider," the manufacturer is the more sensible bearer of liability. Third, product-related harm is not merely individual harm but large-scale public harm that endangers public safety. Fourth, from the perspective of fault and remedy, manufacturing a defective product is a legally recognized fault, and victims are entitled to relief.
An analysis of the nature and characteristics of AI shows, however, that these rationales do not support imposing strict product liability on AI systems. AI systems cannot guarantee error-free performance in the way traditional products can. The long experience of software engineering and operating systems shows that program errors (bugs) are unavoidable. Hardware can achieve safety through material redundancy and reinforcement, but software cannot: simply stacking or duplicating program code does not lower the probability of failure. Nor is the provider or manufacturer of an AI system the "least-cost avoider" of accidents or the best guardian of public safety. For AI systems that form part of products, the risks and benefits the system brings, and whether and how they can be avoided, usually depend on the characteristics of the product in which it is embedded rather than on the AI system itself. The accuracy and efficiency of a medical AI system, for example, must be judged in light of the use and purpose of the specific medical product; to prevent accidents, the manufacturers and users of the end medical device will know more of the relevant information than the provider of the AI system.
Even if the law were to place AI systems under strict product liability on paper, practice would inevitably drift back toward fault liability. In the history of software, countries tried to standardize software like physical goods and impose strict product liability on it, but practice showed that such treatment did not fit software's characteristics, and the law of most countries has settled on fault liability for software-related harm. Strict liability for software survives only in rare cases, such as aeronautical charts used for aircraft navigation, where it is justified by the charts' high standardization and extreme safety requirements. Most other software is closer in nature to a complex information service delivered in code, and for information services the more reasonable regime is contractual liability, fault-based tort liability, and ethical obligations grounded in government oversight or industry self-regulation. AI systems resemble software but are even more variable, exhibit "emergent" behavior, and depend more heavily on specific products and scenarios; they are therefore even harder to render safe in the way physical products can be.
Declining to impose product liability on AI systems does not mean exempting the relevant actors from private law liability. Besides contractual liability, fault-based tort liability, and ethical obligations on AI providers, the law can still impose strict product liability on the end products that incorporate AI. It is more reasonable to place strict liability on the manufacturers of end products, especially large enterprises. Upstream, these companies are fully capable of identifying the risks of AI systems and of managing them through contract, without needing the implicit safety guarantee of product liability law. Downstream, consumers deal directly with these large companies, and holding them strictly liable better protects consumer rights and sustains public confidence in AI products.
In addition, different fields, industries, and agencies can set safety supervision and market access standards for different products, which will in turn supply standards or reference points for AI tort cases. For example, if the automotive regulator determines that an autonomous vehicle whose accident rate is less than half that of human driving is a safe vehicle, courts can use that regulatory safety standard as a basis for tort judgments, while determining fault among AI providers, car manufacturers, human drivers, and other actors case by case. The tort liability of AI systems is an extremely complex question that deserves separate treatment; this article points out only that simply imposing strict product liability on the AI system itself accords with neither the characteristics of AI nor the basic principles of product liability law.
4. Legislation of Artificial Intelligence in China from a Global Perspective
From the perspective of comparison and global competition, we can finally analyze and look ahead to China's AI legislation. Overall, the current state of China's AI legislation reflects China's particular national conditions and is consistent with the basic principles of AI legislation. Looking forward, China should adhere to its current legislative approach for some time, refrain from legislating hastily, and consider formulating an artificial intelligence law when conditions are ripe. If such a law is formulated, it can be designed at three levels: legislative values, public law regulation, and special private law rules.
4.1 The current state of artificial intelligence legislation in China
At present, China has neither undertaken EU-style comprehensive AI legislation nor produced an official draft artificial intelligence law for deliberation. The Chinese government has placed AI legislation on its planning agenda. The New Generation Artificial Intelligence Development Plan issued in 2017, for example, proposed to achieve "the initial establishment of AI ethical norms and policies and regulations in some fields" by 2020, "the initial establishment of AI laws, regulations, ethical norms, and policy systems" by 2025, and "more complete AI laws, regulations, ethical norms, and policy systems" by 2030. In the actual legislative process, however, China has remained cautious.
Among existing legislation, the most influential is the Interim Measures for the Administration of Generative Artificial Intelligence Services (the "Interim Measures"), issued by the Cyberspace Administration of China together with several other departments. On the whole, this departmental rule remains focused on the specific scenario of generative AI and mainly applies existing legislation to that field. As to its statutory basis, the Interim Measures rest on the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, the Science and Technology Progress Law, and other statutes. As to regulatory scope, they focus on illegal information, algorithmic discrimination, intellectual property, personality rights and personal information protection, and the protection of minors, and they provide that violations of the relevant laws or rights in the AI context will not be excused. In this respect, although the Interim Measures take generative AI as their regulatory object, they create no new legal obligations for it: even without the Interim Measures, the relevant actors would still bear the corresponding legal obligations in developing and using generative AI. The significance of the Interim Measures lies in more focused supervision of generative AI and in the clear signal that generative AI is "not a lawless zone."
In addition, China has formulated scenario-based rules for AI applications in different fields and departments. In the judicial field, the Supreme People's Court issued the Opinions on Regulating and Strengthening the Judicial Application of Artificial Intelligence, which highlight the principle of "assistance to adjudication" and stress that "whatever the level of technological development, AI shall not replace judges in adjudication; AI-assisted results may serve only as a reference for trial work or for trial supervision and management, ensuring that judicial judgments are always made by judges, that adjudicative power is always exercised by the adjudicating body, and that judicial responsibility is ultimately borne by the judge." In labor protection, the State Administration for Market Regulation, the Ministry of Human Resources and Social Security, and other ministries proposed, in the Guiding Opinions on Implementing the Responsibilities of Online Catering Platforms and Effectively Protecting the Rights and Interests of Takeaway Delivery Riders, that rider assessment replace the "strictest algorithm" with a "middle-ground algorithm" and similar methods, reasonably setting assessment factors such as order volume, on-time rate, and online rate, and appropriately relaxing delivery time limits. In information services and public opinion dissemination, the Cyberspace Administration of China and other departments issued the Provisions on the Administration of Deep Synthesis of Internet Information Services, which require deep synthesis service providers to assume primary responsibility for information security, to regulate the development of the technology, and to undergo ex ante risk supervision, such as "filing, modification, and cancellation of filing procedures" and "security assessments," for deep synthesis technologies "with public opinion attributes or social mobilization capabilities."
The legislative approach China has taken at this stage is well suited to the problems AI legislation presents, and China should therefore not rush to formulate a unified artificial intelligence law now. Admittedly, quickly enacting an AI law would bring certain benefits: it would perform the law's expressive function, attract attention in the international competition over institutional rules, and might even remove some institutional constraints on AI development. But past experience and China's legislative practice show that once comprehensive legislation is undertaken, it tends to become restrictive, regulation-heavy legislation. In terms of timing, this means an AI law cannot be rushed. Regulating certain specific AI risks through narrowly targeted legislation while leaving space for development and exploration is the best choice for China at present and for some time to come.
4.2 The design of China's future artificial intelligence law
When the time is ripe, China's artificial intelligence law can be designed along three dimensions: legislative values, public law regulation, and private law liability.
4.2.1 Legislative values
In legislative values, China's AI legislation should attend to the coordination of the domestic and the international and of hard power and soft power, and put forward a Chinese approach suited to China's AI development that can influence, or even lead, global AI legislation. As noted above, AI legislation everywhere faces complex dialectical relationships: between development and security, and between each country's particular conditions and its international influence. China faces the same problems, and there is no simple panacea. The only path is to analyze in depth the state of AI development and the value system at home, together with the patterns of AI development and the competition of value systems abroad, and on that basis to promote China's legislative approach internationally through equal cooperation and multilateralism, shaping the "greatest common denominator" of global AI legislation.
Specifically, at the level of values, China's AI law should oppose US hegemony and the EU's sense of superiority while recognizing the US orientation toward development and the EU's protection of vulnerable groups. As noted above, US federal-level AI measures center on export controls against China and other countries; this hegemonic, Cold War mentality harms China and may not benefit the United States itself. Yet the development orientation reflected in US AI legislation is close to China's own. The EU's AI legislation reflects its habitual stance of superiority, which fits neither the multipolar trend of today's world nor the values of equality and multilateralism in China's value system. Yet the EU's legislative protection of vulnerable groups overlaps with China's values. At the level of values, China's future AI legislation should embody state support for new and advanced productive forces and stress the potential of AI development to benefit the people; at the same time, it should embody fairness, equality, and the protection of vulnerable groups, moving beyond the US and European tendency to equate fairness and equality with anti-discrimination alone.
4.2.2 Public law regulation
At the level of public law regulation, my country's AI legislation should distinguish among national security, major public safety, general public safety, and non-public-safety issues. First, AI legislation should not devote many provisions to national security. The reason is not that national security is unimportant, but that my country's National Security Law and related laws have already made top-level designs for national security, and safeguarding national security depends more on issue-specific enforcement, which is difficult to capture in specific legal rules. For my country's AI industry, which urgently needs to develop, packing the AI law with national security rules would send excessive signals of restricted development to domestic and international audiences, harming industrial development and international cooperation, and ultimately the goal of safeguarding national security through the development of AI.
Second, within public safety, a further distinction should be drawn between major public safety and general public safety. For the former, such as the major risks AI may pose in fields like nuclear energy, my country's AI law can provide for unified risk prevention and establish a strongly supervised model of risk regulation. Strong supervision in these fields helps boost and protect public confidence in the development of AI and establishes my country's international image as a developer of reliable and safe AI. For general public safety, such as autonomous driving, small unmanned aircraft, medical devices, and generative artificial intelligence, my country's AI law should adopt scenario-based regulation, rely primarily on industry and enterprise self-regulation, authorize different departments and industry regulators to address such risks, and establish a mechanism for reviewing the legality of departmental regulation, strictly guarding against departmental self-interest, bureaucratism, and formalism in AI regulation.
Finally, for non-public-safety issues, my country's AI legislation should avoid ex ante regulation and rely on tort law and other systems for ex post relief. Admittedly, there is no fully determinate standard for what counts as non-public-safety; the boundary depends to a large extent on the safety standards set by regulators in specific industries and fields. The distinction nonetheless matters. At present, many illegal acts in the broad sense are distinctly non-public-safety in character, such as intellectual property infringement and defamatory speech generated by artificial intelligence. In drafting regulations, some departments have treated such infringements on a par with acts that endanger public safety and subjected them to ex ante regulation. Such practices violate the general principles of risk regulation and should be corrected. Non-public-safety infringements should be addressed through tort law and other ex post remedies, rather than through risk regulation that erects excessively high barriers to market access.
In addition, for general public safety risks caused by AI, my country's AI legislation should be guided by actual risk, take public risk perception into account only to an appropriate degree, and avoid zero-risk regulation. When the probability of accidents from an AI application is lower than that of humans performing the same task, its development and application benefit society. Yet because the public "feels" risk through its own perceptions and is easily swayed by media coverage and high-profile incidents, it often holds unrealistic expectations of AI safety. Risk regulation of AI should therefore be grounded in "safety" in the scientific, probabilistic sense, while accommodating the public's risk perception and "sense of security", and should take as its benchmark the level of risk the public can accept under the guidance of rational voices from government, industry, and experts. For example, when self-driving cars are, in statistical terms, twice as safe as human-driven cars or more, and the public can broadly accept that level of risk under such guidance, self-driving cars should be allowed onto the market. Moreover, when an autonomous vehicle is involved in an accident, the victim may seek compensation from the autonomous vehicle company through insurance or other means, but the accident alone should not justify administrative penalties or the denial of market access.
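To make the probabilistic threshold concrete, consider a minimal hypothetical calculation (the figures below are illustrative assumptions, not data from this article). If human-driven cars cause serious accidents at a rate $r_h$ per unit of mileage, the "twice as safe or more" criterion would admit an autonomous system only when its accident rate $r_a$ satisfies

$$r_a \le \frac{1}{2} r_h.$$

With a hypothetical $r_h$ of 2 serious accidents per million kilometers, market entry would presuppose $r_a \le 1$ per million kilometers, together with public acceptance of that residual level of risk.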
4.2.3 Private law liability
At the level of private law liability, my country's AI legislation should build on the current tort law and product liability regime, rather than hastily or radically creating new systems.
First, for AI systems embedded in products, such as medical devices, cars, and toys with embedded AI, where such products cause harm, my country's AI law should apply product liability to the manufacturers of the terminal products, and contractual liability and fault-based tort liability to the providers of the AI systems. Second, for AI systems that assist decision-making, such as personalized recommendation algorithms in commercial settings and resume-screening systems in recruitment, my country's AI law should apply consumer protection liability or employer liability to the users or deployers of the AI, and again contractual liability and fault-based tort liability to the providers of such systems. Finally, for providers of generative AI and large models, my country's AI law should analyze tort liability along several lines, combining platform liability, publisher liability, and speaker liability: generative AI and large models simultaneously play the roles of network platform, publisher, and speaker, and their tort liability should be analyzed comprehensively under current law.
my country's AI law may also create new systems for AI infringement in certain areas. Before doing so, however, the private law treatment of AI should focus on rule formation through individual judicial cases. As noted above, the application scenarios of AI are diverse and complex, and the technology develops rapidly. For such infringements, giving full play to the case-by-case function of tort law better adapts legal rules to the real world and preserves the predictability of the legal system. As legal scholarship has shown, in complex, diverse, and fast-changing scenarios, bottom-up rule-making grounded in adjudication and case study is often more predictable than top-down rule-making, because the parties and adjudicators in individual cases have more timely and sufficient information and can make more reasonable judgments about each party's fault and responsibility. When such judgments converge on consensus rules, or potential consensus rules, those rules can be incorporated into the AI law or, in the interim, elevated into judicial interpretations, achieving bottom-up guidance for adjudication.
Conclusion
Unlike the safe-harbor rules for online platforms and personal information protection law, global AI legislation is far from reaching a general consensus, and the future direction of legislation in my country and in other countries and regions remains open. Given the common problems that AI legislation faces worldwide and the global competition and coordination in the AI field, this article has analyzed the issue from the perspective of comparative law, showing that the United States and the European Union have adopted legislative models that appear similar on the surface but differ profoundly. The United States has adopted a market-led, department-based, field-specific legislative approach: at the federal level it focuses on regulating government agencies' use of AI, while taking a self-regulatory approach to private enterprises and imposing only limited supervision through existing consumer protection and civil rights laws. The European Union has adopted unified AI legislation, unified tiered regulation of AI risks, and the inclusion of AI systems within the scope of product liability law. Facing AI legislation in the United States, Europe, and other countries, my country urgently needs an AI legislative solution that fits its own national conditions while commanding the greatest international common ground and international leadership.
Specifically, my country should for now continue to legislate on a small scale around departmental, industry-specific, and scenario-based issues, and enact a unified, comprehensive AI law when the time is ripe. At the level of legislative values, my country's future AI law should reflect the concepts of promoting development, open cooperation, and equal sharing, and oppose the de facto hegemony of the United States and the value superiority of the European Union. At the level of public law regulation, the AI law can center on risk regulation, but should distinguish among national security, major public safety, general public safety, and non-public-safety issues: it should not contain many national security clauses; it should set mandatory rules for major public safety risks; for general public safety, it may authorize departments and industry authorities to legislate, while subjecting their powers to legal control and requiring them to respect industry self-regulation; and for non-public-safety issues, it should avoid ex ante regulation. At the level of private law liability, my country's AI law may eventually elevate special rules on AI infringement into legislation, but should first accumulate judicial experience with specific AI infringement cases on the basis of current tort law and product liability law. In general, my country's AI legislation needs strategic patience, which both leaves room for the AI industry and technology to explore and develop, and provides academic and empirical support for future comprehensive AI legislation.
The original article was published in Comparative Law Research, No. 4, 2024. Reprinted with the authorization of the WeChat public account "Comparative Law Research".