The Special Tort Liability Rules for Damage Caused by Artificial Intelligence
Lin Huanmin
Associate Professor, Koguan School of Law, Shanghai Jiao Tong University
Abstract: The determination and assumption of liability for artificial intelligence (AI) infringement are among the key and challenging issues in AI legislation. The product liability approach fails to effectively address the three major challenges posed by torts arising from AI applications: proving fault and causation, recognizing new types of damages, and identifying the liable parties. AI infringement liability legislation should be based on fault liability while incorporating specialized supplementary provisions to ease the evidentiary burden on the infringed party in both the establishment and the assumption of liability. The infringed party can overcome the evidentiary barriers caused by information asymmetry only by gaining access to AI development records, activity logs, and other relevant documents. Legislation should therefore establish evidence disclosure rules and impose information disclosure obligations on AI-related entities under certain conditions, providing a substantive legal basis for courts to issue orders to produce documentary evidence. As for the redress of virtual damages in the AI era, rather than expanding the scope of material damages, it would be preferable to abandon the strict requirement of "seriousness" for mental harm and adopt a "significance" standard instead. To reduce the burden of proving causation for consumers of AI products, the legislation should establish a rule presuming causation under specific conditions. When damage has definitively been caused by a breach of obligation but the exact source of liability is difficult to ascertain, all members of the same commercial and technological unit should bear joint and several liability for the damage.
Keywords: Artificial intelligence tort; product liability; fault liability; evidence disclosure; virtual damage
1. Introduction
With the leapfrog development and industrial application of artificial intelligence technology, incidents of AI infringement are also on the rise. Responsible AI development requires robust and forward-looking governance. Without a clear and distinct framework for AI infringement liability, there will be no true AI safety. Artificial intelligence possesses characteristics such as autonomy, networked operation, unpredictability, and data dependency. Once an AI tort occurs, it is difficult for the infringed party to effectively prove elements such as fault, causation, and damages, and it may even be difficult to identify the actual liable party. AI application infringements involve strong uncertainty in both the attribution of liability and causation, which has become one of the three major barriers hindering the application of AI. The rules in the Civil Code on traffic accident tort liability, medical tort liability, and the like can, after improvement, become the normative basis for regulating specific AI applications such as autonomous vehicles and medical AI. If AI is applied to high-risk operations such as nuclear facilities and aircraft, the high-risk liability provisions of Article 1236 of the Civil Code are of course applicable. However, liability rules for specific scenarios cannot comprehensively regulate the various types of AI infringing activities (such as digital assistants, facial recognition, smart home control systems, and generative AI infringements). How to design a comprehensive set of AI infringement liability rules is thus a major issue in AI governance.
One approach is to expand the scope of product liability, requiring AI software providers and sellers to bear strict liability for damages caused by AI system defects. The EU's revised "Defective Products Liability Directive" [Directive (EU) 2024/2853], adopted in October 2024 and effective that December, explicitly lists software as a product in Article 4(1), and Recital 13 further clarifies that "software is a product for the purposes of the application of no-fault liability, irrespective of its mode of supply or use, and irrespective of whether the software is stored on a device or accessed through communication networks or cloud technologies, or supplied through a software-as-a-service model." Accordingly, AI systems become subject to the EU's product liability framework. However, the product liability approach may not effectively resolve the legal application challenges arising from AI applications.
2. The Insufficiency of the AI Product Liability Approach
Product liability is a form of strict liability, which seems to offer better protection for the legitimate rights and interests of parties in relation to AI compared to general fault liability. The EU's revision to expand the scope of application of product liability is a proactive response to the challenges of tort law in the intelligent era. However, the product liability approach may not effectively resolve the three major challenges posed by AI applications: proving fault and causation, recognizing new types of damages, and identifying the liable parties.
2.1 Difficulty in Proving Fault and Causation
Due to the autonomy and networked nature of AI systems, proving fault and causation in AI torts is extremely difficult. ① AI systems possess a certain degree of autonomy. When an AI tort occurs, it is difficult to determine whether the system developers, providers, users, or other related parties are at fault. In the past, when a computer system did not operate as expected, the system was considered to be in error. AI systems, represented by deep learning algorithms, can self-learn and self-evolve and can make decisions or take actions based on rules that are not entirely pre-determined. When an AI system produces an unexpected result, it cannot automatically be assumed that the system is in error. Taking generative AI as an example, such systems can mine new linguistic patterns from massive amounts of data and produce logically coherent "fabrications," the so-called "hallucinations." When generative AI infringes on another's reputation or personal information, we cannot determine whether the damage was caused by a breach of duty by a relevant AI entity or is a result of the AI's own self-evolution. ② AI activities are networked, making it difficult to determine the causal relationship between an act and the damage. The output of AI is the combined result of multiple factors such as algorithms, computing power, data, and human-computer interaction. Once damage occurs, it is often difficult to identify which factor decisively caused it. Taking a smart home as an example, a malfunction of a smart device could be caused by an algorithm error, flawed training data, a fault in the real-time sensing equipment, inefficient remote data processing and transmission, a third-party cyberattack, or a combination of these factors, making it difficult to identify which one or several factors played a substantial role. If the cause of the damage cannot be determined, it is difficult to judge whether a causal relationship exists between the act and the damage.
The product liability approach does not effectively alleviate the plaintiff's burden of proving fault and causation. Under the product liability approach, the infringed party still needs to prove the existence of a causal relationship between the defendant's act and their own damage (Article 1202 of the Civil Code). By comparison, product liability replaces "fault" with "defect," which may seem to lighten the infringed party's burden of proving fault. Recital 2 of the EU "Defective Products Liability Directive" points out that the reason for expanding the scope of products to include software is to use strict liability to address the problem of loss allocation brought about by the development of modern technology. However, the determination of a defect also carries shades of a fault judgment. The main standards for determining a defect are currently the "consumer expectation" test and the "risk-utility" test. ① Under the consumer expectation test, the producer is responsible only for foreseeable dangers. Article 7(2)(b) and (d) of the newly revised EU "Defective Products Liability Directive" both emphasize foreseeability in assessing defectiveness. But foreseeability is also the core standard for judging the duty of care. Under the product liability approach, the infringed party still needs to prove the existence of a reasonable expectation. Yet it is difficult for civil subjects to fully understand the operating mechanism of AI systems, and they may not be able to form a reasonable expectation. For example, there is no definite answer to whether consumers should expect AI to make mistakes as ordinary people do, or whether AI should perform better than humans. ② The risk-utility test likewise bears a certain similarity to a fault judgment. The risk-utility test is a quantitative analysis method that relies heavily on the accumulation of high-quality data and effective evaluation models. Before an AI technology is put into application, there are often no available data or models with which to assess its risks and benefits. Against this backdrop, one might resort to the reasonable alternative design standard proposed in § 2(b) of the U.S. Restatement (Third) of Torts: Products Liability to determine whether a defect exists: if a safer and reasonable alternative design exists on the market, the product can be deemed defective. But if this were the case, it would lead to a winner-takes-all situation, which is not conducive to the development of small and medium-sized AI enterprises. Article 7(3) of the EU "Defective Products Liability Directive" explicitly states that a specific AI product cannot be considered defective merely because a better product exists on the market. Whether a reasonable alternative design exists should be judged comprehensively based on factors such as the possibility of eliminating the product's danger without impairing its utility, the user's ability to control the risk, the possibility of controlling the risk through warnings, and the possibility of the manufacturer spreading losses. These factors also constitute the frame of reference by which jurists judge whether an actor should bear a duty of care. It can thus be seen that establishing a defect requires proving that the producer violated a reasonable duty of care, which is essentially the same as determining whether an actor is at fault. Since the determination of a defect also involves judging the duty of care, applying product liability to AI infringing activities cannot effectively reduce the evidentiary burden on the infringed party.
2.2 Barriers to Accommodating New Types of Damages
How to remedy new types of damages is a major challenge for tort law in the AI era. The new types of damages caused by AI torts fall mainly into two categories: one is discrimination, manipulation, and the like arising from data processing; the other is data damage or pollution caused by AI activities. ① Harms such as discrimination, manipulation, and control are difficult to recognize as damages under tort law. Damages are divided into property damage and non-property damage. The former refers to the loss of transferable and substitutable material legal interests, including damage to property and expenses incurred to restore health; the latter refers to harm to personal life values that cannot be measured in money, such as damage to reputation and mental suffering. Where AI activities infringe on the rights to life, health, and tangible property, the existence of damage is self-evident. Improper processing of personal information by an AI system may expose civil subjects to discrimination, manipulation, and control. However, the infringed party usually has not suffered an actual reduction in property and can hardly be regarded as having suffered material damage. Civil subjects can certainly claim non-pecuniary damage, but unless the harm amounts to "serious" mental harm, such a claim is unlikely to receive the court's support (Article 1183 of the Civil Code). ② When AI activities directly cause data deletion, damage, or pollution, it is difficult for the infringed party to prove the existence of damage. Given that there is currently no market practice of paying for personal information, it is not easy for individuals to prove that the damaged personal information has a real and certain property value. In judicial practice, some judgments have refused to support the damage compensation claims of personal information subjects on the grounds that no damage exists; even judgments that support compensation tend to avoid the question of whether damage exists and determine only the amount of compensation on the ground that "personal information has property value." One approach is to provide relief to data rights holders through the protection of the storage medium.
However, when the owner of the medium and the data rights holder are not the same entity, it is difficult to meet the demands of the data rights holder: the data rights holder cannot directly file a lawsuit and obtain corresponding damages. If the infringed party claims compensation for mental distress, they will also face the dilemma of needing to prove the existence of "serious mental distress." Under the existing tort law framework, an AI system deleting family photos, travel videos, chat records, etc., will cause distress to the civil subject, but may not necessarily cause serious mental distress.
The product liability approach has not cleared the way for remedying new types of damages in AI torts. ① Product liability is not designed for harms such as discrimination and manipulation; its focus is on personal injury and property safety. Recital 24 of the EU "Defective Products Liability Directive" explicitly states that pure economic loss, infringement of privacy, and discrimination fall outside the scope of the directive. Although the product liability rules in China's "Civil Code" do not restrict the scope of damages, they likewise provide no new normative basis for non-pecuniary damages caused by AI torts. In this context, even if the infringed party claims under product liability, they still cannot obtain effective relief if they do not meet the strict requirements for compensation for mental distress stipulated in Article 1183 of the "Civil Code". ② Whether product liability can provide relief for data loss remains controversial. Article 6(1)(c) of the EU "Defective Products Liability Directive" stipulates that "destruction or corruption of data not used for professional purposes" constitutes damage. The EU's expansion of the scope of relief under product liability to cover data has not met with unanimous approval. And regardless of whether the EU's innovation is reasonable, there is no clear answer in the Chinese legal system as to whether data can be the object of property rights. During the drafting of the "Civil Code," there was a proposal to treat data assets and network virtual property as a new type of object of intellectual property, but it met with fierce opposition and did not materialize. The "Civil Code" ultimately adopted an open-ended provision in Article 127, namely that "where the law has provisions on the protection of data and network virtual property, those provisions shall be followed," thereby reserving space for the formulation of special rules. China currently has no special data legislation. When the current law provides no clear guidance, even if product liability is applied to AI torts, the dilemma of determining data damage remains.
2.3 Difficulty in Determining the Liable Party
Due to the complexity of AI activities, the liable party is often hidden within the intelligent ecosystem, making the assumption of liability for AI torts a difficult problem of legal application. The AI industry chain is long, and a single product or service involves numerous entities, including designers, providers, users, third-party service providers, and sensor manufacturers. A flaw anywhere in the AI chain can lead to damage. For example, system updates are crucial for maintaining system stability and patching vulnerabilities, but an improper update can degrade system performance and introduce a failure (bug). Moreover, current AI activities exhibit strong human-computer interaction. AI systems do not operate entirely automatically on the basis of hardware and software; they achieve their intended goals through interaction between the system and human operators. Human-computer interaction, on the one hand, prevents a system from running out of control through human participation (human in the loop); on the other hand, it also increases the risk of AI misuse. A medical AI may have its parameters modified by users or be trained with medical data from specific groups, making it more prone to false positives rather than false negatives. However, when a medical accident occurs, the infringed party cannot determine whether the problem arose from the AI system's original design or from subsequent user actions. Because the real liable party cannot be identified, the infringed party's claim for damages can hardly be supported by the legal order.
The scope of liable parties under the "Civil Code's" product liability is relatively narrow; if that scope were expanded, it would effectively require multiple entities to bear strict liability and amount to a form of joint and several tort liability. The "Civil Code" limits the liable parties in product liability to producers and sellers (Article 1203). When an AI tort occurs, it might seem that the problem of loss allocation can be solved by allowing the infringed party to hold the producer or seller directly accountable. Traditional products are often the output of a linear production chain, in which upstream suppliers sell product components to the producer, who then manufactures the product and introduces it to the market. Because consumers cannot deal directly with the component producers, they should seek damages from the end-producer who "bundles" the components into a product. This is the rationale for limiting the liable parties in product liability. In contrast, AI products have non-linear characteristics. After purchasing hardware, consumers can purchase smart software on their own and accept backend support from third parties. In the digital world, products are "unbundled," which makes the value chain non-linear and gives it a network-like structure. This change in the state of the product means, on the one hand, that the end-producer cannot control the production and use of the product as fully as a traditional producer, so it is not necessarily appropriate for the producer to bear strict liability for the damage; on the other hand, it means that consumers can directly choose software providers and third-party service providers and decide whether to upgrade or update, and they should bear the adverse consequences of their own decisions. The EU's "Defective Products Liability Directive" still insists on strict liability, but it cannot ignore the networked character of smart products and ultimately has to adopt a strategy of expanding the scope of liable parties. The EU expands the scope of liable parties in product liability to include component producers (Article 8(1)), and entities providing related services such as navigation and health monitoring are also treated as component producers (Recital 17); since users can directly choose service providers and in some cases change the original design and function of the system, the directive also has to provide that natural or legal persons who make substantial modifications to the product are likewise regarded as producers (Article 8(2)). Thus, the liable parties under EU product liability include product producers, component producers (software producers and related service providers), and users who make substantial modifications. Under Article 12 of the directive, multiple producers are jointly and severally liable for the damage. Accordingly, the new EU product liability regime essentially resolves the problem of loss allocation by defining all relevant parties as producers and having them bear joint and several liability, thereby hollowing out the original institutional design of limiting the liable parties and having the end-producer or seller bear strict liability. Simply listing the possible liable parties in a networked product and requiring them to bear joint and several liability for the damage is not necessarily reasonable.
In summary, the product liability approach cannot effectively resolve the three major challenges in AI torts: proving fault and causation, accommodating new types of damages, and identifying the liable parties. China should not, under the influence of the Brussels effect, follow the EU in reforming product liability to respond to these challenges. Instead, it should return to the problems themselves and design targeted, barrier-breaking rules at the levels of liability establishment and liability assumption, so as to reduce the evidentiary burden on the infringed party and provide relief for their losses.
3. Liability Establishment: Rules to Facilitate Proof in AI Torts
AI activities possess characteristics such as autonomy, being network-based, and data dependency, which give rise to many evidentiary challenges. Instead of amending the "Product Quality Law" and using product liability to regulate AI activities, it is better to directly "prescribe the right remedy" and configure rules for information disclosure and the determination of damages and causation in the "Artificial Intelligence Law" to resolve the evidentiary difficulties in establishing tort liability.
3.1 The Principle of Fault-Based Liability for AI Torts
The following will first discuss the principle of attribution for AI torts, and on this basis, analyze how to reduce the difficulty of proving fault for the infringed party.
3.1.1 Fault Liability as the Principle of Attribution
People readily and intuitively accept the view that AI applications bring new risks and that the legal order should therefore apply strict liability to AI activities that create such risks. However, the risks caused by AI activities are not necessarily dangers in the civil law sense. Danger in civil law, as typified by the "elimination of danger" stipulated in Article 179 of the "Civil Code," refers to specific, factual dangers that are real, present, and imminent, posing an actual threat to the personal and property safety of others. Product liability, high-risk liability, and the like in the "Civil Code" all target specific dangers rather than abstract risks. In contrast, when we discuss the risks generated by new technologies, we are often referring to abstract events in a collective sense rather than individual, specific dangers. We cannot simply assert that AI activities should be subject to strict liability merely because AI applications bring new risks. Even for the research and application of high-risk AI systems, strict liability can be applied only on the premise that specific, real-world high-risk activities can be clearly defined. If legislation abstractly provides that "strict liability should apply to high-risk AI activities," the provision will be unenforceable in practice because the type and scope of high-risk AI activities cannot be defined.
Applying strict liability to AI torts is not necessarily conducive to reducing the frequency of accidents and may, on the contrary, increase the overall cost of accidents. An important argument for strict liability is that it provides a predictable external environment, incentivizing actors to take reasonable preventive measures so as to minimize social costs (accident costs, costs to the environment of trust, prevention costs, etc.). Although strict liability can internalize negative externalities, positive externalities will not fully flow back to the participants. Even if a civil subject bearing strict liability actively improves the system's safety level, the danger cannot be effectively controlled if other related entities in the AI activity chain act negligently. AI activities involve multiple elements such as algorithms, computing power, and data. The controller of an AI system cannot fully examine whether training data and sensor data obtained from third parties are flawed; where algorithms interact, no one can guarantee the accuracy of the algorithm design. If damage is unavoidable, a rational liable party will slacken in reducing the probability of accidents through technical and managerial means and will instead tend to spread accident costs through pricing mechanisms, ultimately increasing accident costs and lowering overall social welfare.
In comparison, fault liability can comprehensively evaluate the conduct of civil subjects and incentivize the relevant parties to take measures to prevent damage. Fault liability can meticulously assess the conduct of all parties and identify the subjects who should be held accountable. Taking large language models as an example: ① at the design level, an AI system is composed of a foundational large language model and a specific model. The large model, trained on massive amounts of data, provides the underlying logical support, while the specific model is adapted to numerous specific industries and scenarios through specialized optimization training for a "specific field of expertise." In this collaborative application relationship, the risks created by the system are superimposed. Only by applying fault liability, which makes each civil subject responsible for its own negligent acts, can the legal liability of different upstream and downstream developers be appropriately pursued. ② At the application level, users of large AI models can be further divided into front-end users and back-end operators. The former have the right to decide on and use the AI system, while the latter continuously define the technical characteristics of the AI and provide necessary services. Although the front-end user has the status of an owner or custodian, the operation of the AI system is inseparable from the services of the back-end operator, and the latter benefits from continuously providing those services. Under the fault liability model, jurists will search for the responsible person along the entire application chain and require the relevant entity to bear damage compensation liability corresponding to its own degree of fault. Law-and-economics studies also point out that when complementary care among the parties to a tort dispute is more common than substitute care, that is, when the damage is not caused by a single act, strict liability tends to lower society's overall standard of care for AI applications; conversely, fault liability incentivizes every related party to make "complementary efforts" to prevent the occurrence of damage.
Presumed fault liability may place small and medium-sized AI enterprises in a liability predicament and is not an ideal legislative choice. As a type of fault liability, presumed fault liability helps to reduce the plaintiff's evidentiary burden without imposing excessive blame on AI enterprises, and thus appears to be a compromise solution for regulating AI activities. The personal information infringement rule in Article 69(1) of China's "Personal Information Protection Law" also adopts presumed fault liability, so applying presumed fault liability to AI torts would maintain consistency with that law. However, the AI industry chain is long, and a single product or service involves numerous entities. Requiring all related entities to prove their own lack of fault would generate enormous transaction costs. Given characteristics of AI systems such as autonomy and opacity, it is extremely difficult for participants in AI activities to prove their own lack of fault, which would produce a chilling effect and deter small and medium-sized enterprises from entering the AI industry. This concern is not unfounded. One important reason why trading in personal data in China is not thriving is precisely presumed fault liability: when a data transaction infringes on personal information rights, if the data recipient cannot prove that it has fully fulfilled its personal information protection obligations, it bears corresponding legal liability, which makes data recipients wary of conducting personal data transactions. If presumed fault liability were applied to AI torts, small and medium-sized enterprises might be daunted by it, and presumed fault liability would become a "straitjacket" on AI activities. For the research and application of new technologies, the appropriate approach is a "responsive regulation" model, that is, to strengthen communication with the industry through fault liability rules, to form a common understanding of conduct in the process of judging whether fault exists, and gradually to develop new standards and norms. In the early stages of AI development, it is not advisable to adopt presumed fault liability rashly; it is better to wait until the industry has developed and experience has accumulated before considering it.
The insurance system is often an important reason for applying strict liability to tortious acts, but in the early stages of AI development the insurance mechanism cannot play its full role. Whether strict liability or presumed fault liability, either could reasonably be adopted if it could be paired with an effective insurance system to spread costs. Unfortunately, the market currently lacks effective data with which to build an AI tort insurance model. AI is an emerging technology, related applications have not yet been fully rolled out, and cases of damage compensation are relatively rare. Whether promoting voluntary insurance (market-driven) or compulsory insurance (state-mandated), the lack of statistical data on accidents and damages makes effective marginal risk calculation impossible. Considering the high compensation amounts for personal and property damage, insurance companies will be inclined to charge higher premiums for AI activities. The EU Commission's assessment report pointed out that the insurance premiums payable by AI enterprises under a strict liability model would increase by 35% annually. Large enterprises may be able to afford higher insurance costs, but potential entrepreneurs will hesitate over whether to engage in AI research and application because of high premiums. From this perspective, the insurance mechanism not only cannot effectively spread risks but may, on the contrary, become a systemic barrier preventing small and medium-sized enterprises from entering the AI industry.
In summary, AI torts should still adopt the principle of fault-based attribution, and it is not advisable to rashly stipulate presumed fault liability. As for the problem of proving fault caused by the characteristics of AI, the legal order should design special information disclosure rules to resolve it.
3.1.2 Evidence Disclosure Rules to Ease the Infringed Party's Burden of Proof
Under the fault liability model, the plaintiff should in principle bear the burden of proving that the defendant was at fault. In AI tort disputes, however, it is not easy for the plaintiff to do so. Records of AI activity are an important medium for ascertaining the truth, but the defendant often refuses the opposing party's requests for access to information about the AI system on the ground that it involves trade secrets.
To solve the problem of evidence asymmetry, the judiciary has created the system of the order to produce documentary evidence, but its operation has not been smooth. In modern litigation, key evidence is often held by the defendant, and evidence asymmetry is pronounced. Because the plaintiff does not possess the key evidence, it is difficult to bring a lawsuit, and even if one is brought, the plaintiff often loses. The system of the order to produce documentary evidence came into being against this background. The "Interpretation of the Supreme People's Court on the Application of the Civil Procedure Law of the People's Republic of China" (Fa Shi [2015] No. 5), issued in 2015, formally provided for the order to produce documentary evidence in Article 112; the "Several Provisions of the Supreme People's Court on Evidence in Civil Procedure" (Fa Shi [2019] No. 19), amended in 2019, further specified and detailed the system in Articles 45 to 48. Where evidence is asymmetrically held, the court, upon a party's application, issues a ruling ordering the opposing party or a third party controlling the documentary evidence to submit it; if the controller of the evidence refuses to comply, it bears adverse legal consequences. If the order to produce documentary evidence can be applied in AI tort cases, the court will, upon the plaintiff's application, rule on whether to require the defendant to disclose technical documents, system logs, and other information, helping to ascertain the facts and reducing the plaintiff's evidentiary burden. In practice, however, judges rarely issue such orders, and "the judicial application rate and approval rate of orders to produce documentary evidence are low." This phenomenon has a certain rationality, as the order to produce documentary evidence is a modification of the principle that "he who asserts must prove." Chinese scholars also emphasize that a general duty to clarify the facts of a case is not suited to China's litigation practice and that the obligation to produce documentary evidence should be limited to situations stipulated by law. Without explicit legal authorization, the courts are often reluctant to issue orders to produce documentary evidence.
China's future AI law should stipulate evidence disclosure rules applicable under specific conditions, providing a clear normative basis for courts to issue orders to produce documentary evidence. Under China's current legal system, even where a contractual relationship exists between the parties, the defendant has no obligation to explain the cause of the damage. Consumers have the right to know relevant information about goods and services (Article 8(2) of the "Consumer Rights Protection Law"), and operators providing goods or services through remote sales or financial services must also provide additional information such as safety precautions, risk warnings, after-sales service, and civil liability (Article 28). However, consumers have no right to know the day-to-day operation and running status of the actor's systems. Only by creating evidence disclosure rules in the "Artificial Intelligence Law" and providing a substantive legal basis for the courts to issue orders to produce documentary evidence can the problem of evidence asymmetry in AI torts be effectively solved. Article 9 of the EU's "Defective Products Liability Directive" specifically provides for evidence disclosure, allowing the infringed party to submit an application and the court to decide whether to order disclosure, thereby promoting bottom-up good governance of AI. China's future AI legislation can draw on the EU's legislative experience by allowing the party harmed in an AI activity to apply to the court under specific conditions and having the court issue an order to produce documentary evidence requiring the relevant entity to provide activity records; if the obligated person refuses to provide the relevant information, it is presumed to be at fault.
The evidence disclosure rule is, after all, a departure from the classic theory of the burden of proof and can be applied only when specific conditions are met. First, the information whose disclosure the plaintiff seeks should be information that the respondent is clearly required by law to record and retain. Participants in AI activities do not bear a general duty of explanation, and the infringed party cannot vaguely demand that the AI system provider or user disclose all information. The infringed party may request disclosure only of information that the respondent is obliged to record, in order to verify whether the defendant has fulfilled legal obligations such as data management, human oversight, and compliance assessment. Second, the infringed party should present reasonable facts and evidence establishing a "reasonable suspicion." Disclosure of evidence is not a right of the infringed party; whether the relevant AI entity has an obligation to provide evidence is for the court to decide. The infringed party should show that a reasonable suspicion exists but that the truth cannot be further ascertained because of the asymmetry of evidence. The court, after receiving and reviewing the infringed party's application, rules on whether to issue an order to produce documentary evidence. Only by exercising necessary control over the evidence disclosure rule can abusive litigation that seriously interferes with the normal operation of AI enterprises be prevented. Finally, only when the plaintiff has exhausted all appropriate efforts and still cannot obtain sufficient evidence may they request the court to issue an order to produce documentary evidence. The evidence disclosure rule aims to balance the litigation capabilities of the two parties; if the plaintiff can easily obtain the relevant evidence (for example, where the defendant uses open-source algorithms), the defendant's burden should not be increased.
3.2 Expanding the Scope of Compensation for Mental Harm
How to provide relief for new types of harm is another difficult point in handling AI tort cases. On the one hand, China's future AI law should not ignore the growing demand for compensation for damages in virtual space; on the other hand, the legislation should also avoid being overly harsh, causing small and medium-sized AI enterprises to be frequently blamed and their freedom of action to be harmed.
3.2.1 The Dilemma of Fitting into the Material Damage System
In judging material damage, the difference theory has always been the prevailing view, with the objective damage theory and the normative damage theory serving as necessary supplements to the difference theory. Damage under the difference theory refers to the difference between the actual state of property and the state of property if the damaging event had not occurred; the objective damage theory and the normative damage theory, while acknowledging the basic connotation of the difference theory, respectively emphasize the damage to individual, specific things and the importance of the normative purpose. Regardless of which theory of damage compensation is adopted, it is difficult to prove the existence of material damage when data is misused or destroyed.
When data is improperly processed, even though the civil subject may face risks such as discrimination and manipulation, it is difficult to recognize material damage because the civil subject's property status has not changed. To solve this problem, some scholars have proposed the innovative concept of "risk as damage." Other scholars, however, have pointed out that risks lacking certainty cannot be classified as damage. Risk differs from danger: the former is abstract and uncertain, while the latter is more specific and imminent. If no actual damage has been suffered and no expenses have been incurred to prevent damage from occurring, there is only risk, not danger. Judicial organs can hardly recognize the existence of "actual damage"; at most they can recognize a "conjecture about future damage." According to the difference theory, there is at this point no difference in value between the current state and the state that ought to exist; and under either the objective damage theory or the normative damage theory, because the uncertain risk does not diminish the actual use value of the property and no third party has intervened to contain the damage, the civil subject has suffered neither objective nor normative damage. It is therefore difficult for a civil subject who has suffered only risks such as discrimination and manipulation to prove the existence of material damage worthy of legal remedy. In fact, the legal order usually regulates risky activities through ex-ante control measures. Risk is characterized by de-individualization; that is, it is regulated from the perspective of overall controllability rather than from the perspective of remedying individual damage. For product safety risks, China has enacted a special "Product Quality Law" for ex-ante control, and the product liability rules in the "Civil Code" apply only when actual damage occurs. Similarly, for the risks of discrimination and manipulation caused by AI activities, a more expedient path is to regulate them through regulatory documents issued by the competent authorities, such as the "Interim Measures for the Management of Generative Artificial Intelligence Services" and the "Provisions on the Management of Algorithm Recommendations in Internet Information Services".
The deletion, pollution, or destruction of data does not in itself signify the existence of material damage. First, the destruction of data does not necessarily mean that material damage exists. The value of data is realized in its use, and data has no objective market price. Traditional valuation methods such as the income approach, the cost approach, the market comparison approach, and multi-dimensional quantitative assessment have significant limitations in pricing data. Because it is uncertain whether the data has value, it is difficult for the infringed party to claim that a difference exists between the existing state and the state that ought to exist, and it is likewise impossible to prove that the use value has been impaired (objective damage theory) or that damage exists at the normative level (normative damage theory). Second, personal information rights and interests are carried by data. The destruction of data constitutes an infringement of civil rights and interests, but it does not follow that damage simultaneously exists. Article 1165(1) of the "Civil Code" stipulates: "An actor who, due to fault, infringes upon the civil rights and interests of others and causes damage shall bear tort liability." The "Civil Code" lists "civil rights and interests" and "damage" in parallel, from which it can be seen that after a civil subject proves that their civil rights and interests have been infringed, they still need to prove that they have suffered damage. When data is destroyed, the existence of damage cannot be automatically presumed. Article 69(1) of the "Personal Information Protection Law" likewise lists "personal information rights and interests" and "damage" in parallel, indicating that after a personal information subject proves that their personal information rights and interests have been infringed, they still need to prove the existence of damage. Therefore, harm to data does not in itself constitute damage; the infringed party still needs to prove the existence of material or non-material damage.
3.2.2 From "Serious" to "Significant" for Mental Harm Compensation
Instead of expanding the boundaries of material damage, it is better to abandon the "seriousness" standard for mental harm compensation, making it easier for the infringed party to claim such compensation. The main reason scholars have innovated on the concept of material damage is that tort law remedies only serious mental harm (such as harm amounting to mental illness). When an infringed party suffers discrimination or manipulation as a result of data processing, or when data of obvious significance to them is destroyed, they often experience fear, pain, and anger. But if the harm does not reach the level of serious mental harm, it is difficult for the infringed party to claim compensation. The legal order strictly limits mental harm compensation out of concern that it will be abused. Yet when an AI activity clearly violates laws such as the "Personal Information Protection Law," requiring the defendant to bear damage compensation liability will not open the floodgates of liability; after all, the defendant's conduct is inherently blameworthy. Article 82(1) of the EU's "General Data Protection Regulation" does not condition compensation for non-material damage on "serious mental harm"; it provides that "any person who has suffered material or non-material damage as a result of an infringement of this Regulation shall have the right to receive compensation from the controller or processor for the damage suffered." In a 2024 ruling, the Court of Justice of the European Union likewise held that a plaintiff claiming compensation for non-material damage need only prove that the defendant carried out data processing to which the plaintiff had explicitly objected, causing the plaintiff to lose control over their data, without having to prove that the damage reached a certain level of severity. When determining whether serious mental harm exists, U.S. courts no longer require physical symptoms; as long as the defendant's conduct is sufficiently extreme, it can be presumed that the plaintiff has suffered serious emotional distress. Rigidly conditioning mental harm compensation on "serious mental harm" is somewhat outdated. Although Article 1183 of the "Civil Code" provides for compensation only for serious mental harm, the legislative interpretation also points out that "for the understanding of 'serious,' the theory of the tolerance limit should be adopted, that is, what exceeds the tolerance limit of an ordinary member of society is considered serious." China's AI legislation might as well abandon the "seriousness" requirement for mental harm compensation, appropriately expand the scope of such compensation, and remedy the shortcoming of insufficient relief in tort law in the new era.
As long as personal data is of obvious importance to the civil subject and its significance commands widespread recognition, mental harm should be presumed to exist. With changes in society and values, the expansion of the scope of mental harm compensation has become an important topic in the development of modern civil law. Article 651n(2) of the "German Civil Code" stipulates that if a trip fails, the traveler can demand appropriate monetary compensation for the wasted vacation time. In the case of a failed trip, the consumer may experience dissatisfaction, frustration, and disappointment, but not necessarily serious mental harm. Nevertheless, German jurisprudence holds that granting consumers mental harm compensation causes no controversy and, on the contrary, readily attracts public sympathy. China has always been cautious about mental harm compensation, but it has gradually come to recognize such compensation for employment discrimination based on gender, disability, or illness. Whether to support mental harm compensation depends not on whether the mental harm is serious, but on whether it is significant enough to gain general social recognition. When a job seeker suffers employment discrimination because of gender, illness, or the like, a court's award of mental harm compensation readily wins broad support. Similarly, when an AI system improperly processes or destroys data, whether the court should award mental harm compensation should be judged by the typical significance of the data. The sentimental value of personal data such as family photos and travel videos to the civil subject is obvious; the misuse or destruction of such data is hard for the civil subject to tolerate, and a court's recognition of mental harm compensation will rarely be questioned. In contrast, although personal data such as shopping records and browsing records are also protected by laws such as the "Personal Information Protection Law," their value to the civil subject is relatively limited. Unless the civil subject can prove that such data has a specific significance, its misuse or destruction should not give rise to mental harm compensation. As for the difficulty of calculating mental harm, this is not unique to data-related torts: the "Interpretation of the Supreme People's Court on Several Issues Concerning the Determination of Liability for Compensation for Mental Damage in Civil Torts" (Fa Shi No. 7), as amended in 2020, stipulates six reference factors in Article 5, which can provide the necessary guidance.
The remaining controversy is whether mental harm compensation should be limited to situations in which the actor acted with intent or gross negligence. One view holds that personal data attached to personality rights is similar to specific objects of personal significance, so the infringed party should be able to claim mental harm compensation only where the actor acted with intent or gross negligence (Article 1183(2) of the "Civil Code"). From the perspective of normative purpose, the reason Article 1183(2) of the "Civil Code" requires intent or gross negligence is that "the law cannot generally expect the tortfeasor to recognize that a specific object has personal significance to the infringed party." Personal data differs from specific objects: the personal significance of the former is obvious, while that of the latter is to some degree concealed. When a tortfeasor uses AI to process personal data, they should be aware that this may infringe on personality rights, and there is thus no issue of protecting the actor's reasonable expectations. Therefore, limiting mental harm compensation to cases of intent or gross negligence is not consistent with the legislative spirit.
3.3 Presumption of Causation for the Goal of Consumer Protection
How to prove the existence of a causal relationship between the defendant's act and the plaintiff's damage is another difficult point in resolving AI tort disputes. AI activities are networked, and the acts of multiple entities combine to jointly drive the application of AI. Even if the infringed party points out that a specific act of the tortfeasor may have caused the AI to produce a harmful output, they can hardly be regarded as having discharged their burden of proof if they cannot establish this to a high standard of probability.
One approach is to solve the problem of proving causation in AI torts through a complete reversal of the burden of proof. Article 1230 of the "Civil Code" stipulates that the party polluting the environment should prove that there is no causal relationship between their act and the damage. The legislative interpretation points out that environmental pollution damage has characteristics such as long-term latency, persistence, and extensiveness, and it is easy for multiple causes to lead to a single effect. Judging the existence of a causal relationship requires specialized knowledge, so it is necessary to implement a reversal of the burden of proof for causation. AI-induced damage also has characteristics of persistence and extensiveness, and the process of causing harm also has high professionalism and complexity. It is very difficult to ascertain the causal relationship, so it seems that the burden of proof on the infringed party can be lightened through the reversal of the burden of proof for causation. The EU had envisaged promoting a complete presumption of causation in AI torts, but soon abandoned it. Although a complete reversal of the burden of proof for causation can provide relief for the losses of the infringed party, it will also have significant negative economic effects, such as increasing the legal costs of AI enterprises and leading to abusive litigation; in the long run, it is also not conducive to increasing the overall welfare of consumers, as the increase in enterprise costs will ultimately be reflected in the price of products or services, and will also reduce consumers' opportunities to access new products for free. The appropriate approach is to limit the reversal of the burden of proof to specific circumstances to avoid excessively hindering the development of the AI industry.
The legal order should establish a rule for the presumption of causation in AI tort disputes between enterprises and consumers. The AI ecosystem presents highly networked and professional characteristics. Consumers outside the AI ecosystem find it difficult to understand the system's operating logic and to prove the existence of a causal relationship between the enterprise's actions and their own damages. In contrast, AI enterprises are in a favorable position to prove the absence of a causal relationship. The legal order should allocate the burden of proof to the party most likely to ascertain the truth. Article 10(4) of the EU's "Defective Products Liability Directive" stipulates that when the court determines that, due to the complexity of the technology or organization, the causal relationship is difficult to ascertain, a presumption of causation can be made based on the consideration of consumer protection. When an AI application infringes on the legitimate rights and interests of consumers, a causal relationship should be presumed, and the defendant should bear the burden of proving the absence of a causal relationship, thereby resolving the evidentiary challenge and increasing public trust in the application of AI. In contrast, if it is a dispute between enterprises or between individuals, the classic rule of evidence of "he who asserts must prove" should still be applied. In AI tort disputes between individuals, there is no need to favor one party over the other; in relevant disputes between enterprises, the plaintiff is not necessarily in a weak position regarding the burden of proof. Taking an environmental tort dispute between enterprises as an example, the plaintiff may be more capable of proving the existence of a causal relationship than the defendant. In the case of "Tongcheng Coal Storage and Transportation Co., Ltd. v. Yiqiang Packaging Products Co., Ltd. for environmental pollution damage compensation dispute," Yiqiang Company claimed that the dust produced by the coal plant polluted its production of pharmaceutical packaging products, causing quality problems in its pharmaceutical products. As a producer of pharmaceutical products, Yiqiang Company was more capable of proving the effect of dust on the quality of its products, but because the case was an environmental tort dispute, the court required the defendant, Tongcheng Coal Storage and Transportation Co., Ltd., to bear the burden of proof and the adverse consequences of being unable to do so. A one-size-fits-all legislative reversal of the burden of proof for causation may be biased. Similarly, in an AI tort dispute between enterprises, the plaintiff is not necessarily in an unfavorable position regarding the burden of proof compared to the defendant. Only when an AI tort dispute occurs between an enterprise and a consumer should a causal relationship be presumed.
It should be noted that even if the rule of presumed causation is applied, the plaintiff should at least prove the existence of a connection between the defendant's act and their own damage. If the plaintiff cannot point out this possibility at all, the case cannot enter the litigation process. In environmental tort cases, the Supreme People's Court has pointed out that the plaintiff should provide evidence of the connection between the defendant's act and the damage, and based on factors such as the method of polluting the environment and destroying the ecology, the nature of the pollutants, the type of environmental media, the characteristics of ecological factors, the time sequence, and the spatial distance, comprehensively judge whether a connection between the defendant's act and the damage is established. In AI tort disputes, the infringed party should also prove a certain degree of connection between the defendant's act and the damage, such as reports of damage caused by the same batch of products or the absence of other abnormal factors, so that the case can smoothly enter the judicial process.
In summary, China's future "Artificial Intelligence Law" should, through special rules on information disclosure, the determination of mental harm, and the presumption of causation, lighten the infringed party's burden of proving liability. The specific articles can be expressed as follows:
Article n: Where a relevant entity of an artificial intelligence system, through fault, infringes upon the civil rights and interests of another person, it shall bear tort liability. Where the infringed party, after exhausting all appropriate efforts, is still unable to obtain evidence relating to the infringing act, it may apply to the people's court, and the people's court shall order the relevant entity of the artificial intelligence system to disclose the information it has recorded and stored.
Article n+1: Where a relevant entity of an artificial intelligence system infringes upon the personal rights and interests of a natural person and causes obvious mental harm, the infringed party has the right to claim compensation for mental harm. Where an artificial intelligence system infringes upon the legitimate rights and interests of a consumer, the actor shall bear the burden of proving that there is no causal relationship between the act and the damage.
4. Liability Allocation: Special Rules for Multiple Tortfeasors in Apportioning Damage Caused by AI
After resolving the difficulties in determining liability for AI torts, it is still necessary to further discuss the issue of liability assumption. Considering the characteristics of AI, the following situation is likely to occur: when damage is caused by an AI application, the real tortfeasor is hidden in the complex chain of AI activities, ultimately leading to the "disappearance of the tortfeasor."
The rule of common danger liability is not applicable to situations where the source of AI infringement liability is unclear. The rules on multiple tortfeasors in Article 1168 and the following articles of the "Civil Code" are intended to ease the evidentiary burden of the infringed party. Article 1170 of the "Civil Code" stipulates: "Where two or more persons engage in acts that endanger the personal and property safety of others, and the act of one or some of them causes damage to others, and the specific tortfeasor can be determined, the tortfeasor shall bear the liability; if the specific tortfeasor cannot be determined, the actors shall bear joint and several liability." This article aims to solve the problem of liability allocation when the tortfeasor cannot be ascertained, and appears applicable to situations where the source of AI-induced damage is unclear. However, the rule of common danger liability requires that all actors have engaged in acts that violate their duties; that is, each of the multiple actors has created a legally condemned danger, although only one person's danger ultimately materializes as damage. A typical example is when several people shoot towards a road and one bullet hits a passerby: every act of shooting towards the road violates the duty of care and is legally condemnable, which is the legal basis for the legal order's strict treatment of the actors. In AI applications, by contrast, the acts of the participants are not necessarily condemnable. If it cannot be determined that all participants have engaged in acts that violate their duties, the rule of common danger liability cannot be applied merely because the source of liability cannot be ascertained.
4.1 The Rule of Joint and Several Liability for the Same Commercial Technology Unit
Tort law is not based solely on acts; it can also require relevant entities to bear legal liability based on a specific state of combined interests. Employer liability, guardian liability, and similar rules in the "Civil Code" all arise because a close combination of interests exists, such that the employer or guardian should bear legal liability for the acts of others. The liability for objects thrown from buildings stipulated in Article 1254 of the "Civil Code" is likewise a type of tort based on a combined state. The difference is that the users of a building form only an accidental combined relationship through residency, and the residents are likely unacquainted with one another. When the specific liable person cannot be determined, requiring the building users who may have caused the injury to provide compensation merely on the basis of a loose neighborly relationship cannot gain widespread public approval. By contrast, if enterprises are joined together through contracts or other agreements and share common commercial interests and a technical foundation, then even if each commercial unit remains independent, requiring each member to share the risk when the liable entity cannot be ascertained has a rational basis.
When the source of liability cannot be ascertained, legislation can require participants who constitute the same commercial technology unit to bear joint and several liability for the damage. To prevent liability from being hidden in the commercial chain, when multiple entities cooperate closely on the basis of a contract or similar arrangement to form a commercial technology unit, and the infringed party can prove that the commercial technology unit caused the damage, the members of that unit can be held jointly and severally liable even if it cannot be proven which specific link within the unit failed. For example, a smart security system produced by manufacturer A is installed in a smart home system designed by B, and this system runs on an AI ecosystem developed by C. Contractual agreements exist among the three, and together they form a commercial technology unit. When a burglary occurs and the smart security system fails to activate, causing significant losses to the user, it may be unclear which link failed, but it is certain that the loss was caused by the security system's failure to perform its function. In that case, the members of the same commercial technology unit should be jointly and severally liable for the loss. Market entities that voluntarily combine for a common commercial purpose can control interaction and interoperability risks through contracts or similar means, and can agree in advance on the allocation of accident costs; holding members of the same commercial unit jointly and severally liable is therefore unlikely to trigger collective resistance. Drawing on the concept of the same commercial technology unit can both prevent liability from becoming boundless and effectively solve the problem of liability allocation when the source of liability cannot be ascertained.
Requiring members of the same commercial technology unit to bear joint and several liability for damage is an active response of liability law to the evolution of business models in the age of artificial intelligence. As AI and digital technology penetrate deeply into society, business models are also changing, and a new type of business structure situated between organizations and contracts is taking shape. According to transaction cost theory, if the cost of coordinating civil subjects through the market (such as the cost of discovering prices and the cost of negotiation) exceeds the coordination cost of an integrated organization, an integrated organization, i.e., a firm, will emerge to reduce transaction costs. With the support of network information technology, market entities hope to capture the advantages of both the market and the organization, which has led to a third type of business model between contracts and organizations: the "hybrid model." In the hybrid model, members are both interdependent and independent, reducing market search costs and default risks while preserving the flexibility and creativity of the collaborators. The hybrid model is widely adopted by technology companies, and Chinese scholars have duly emphasized the organizational and economic functions of contracts in the new era. However, the hybrid model has the side effect of concealing infringing acts and shielding the responsible parties. In the past, products or services were mostly provided by a clearly identifiable civil subject, and the accountability mechanism was relatively clear. In the hybrid model, AI products or services are embedded in an intelligent ecosystem composed of designers, providers, system update or upgrade service providers, backend operators, and other members. The members are closely connected and interact constantly, and individual acts are hidden within the complex ecosystem; once an infringing event occurs, civil subjects outside the system cannot clearly ascertain the causal mechanism. The evolution of economic models improves production efficiency, but the infringed party should not be made to bear the risk of unclear liability brought about by changes in business models. If members of an intelligent ecosystem could escape liability by virtue of the ecosystem's complexity, a perverse incentive would arise: tasks that could be completed by a single enterprise would be broken down as far as possible and assigned to multiple independent members, thereby obscuring the causal relationship and helping potential infringers escape legal liability.
4.2 Criteria for Identifying Members of the Same Commercial Technology Unit
The difficulty lies in how to define the members of the same commercial technology unit. Closely cooperating enterprises can agree in advance, at low transaction cost, on internal liability-sharing mechanisms, such as indemnity clauses, and thus avoid being caught unprepared. Even if they must bear joint and several liability towards the infringed party, the members will not strongly resist the sharing of liability, because the enterprises have already anticipated and arranged for it. The problem that arises in liability for objects thrown from buildings will not occur among closely cooperating enterprises. Conversely, if the cooperation between enterprises is not close and different enterprises in the industry chain have few opportunities for communication, the transaction costs of negotiating a liability mechanism will be higher, and liability sharing cannot be coordinated in advance through contracts or other means. Once damage occurs, requiring each enterprise to bear joint and several liability is likely to meet resistance. If an enterprise believes that it is without fault yet must nonetheless compensate for the damage, it may be deterred from further business activity and eventually withdraw from the market, with clear negative consequences for both the enterprise and the market. From this perspective, determining whether an enterprise belongs to a commercial technology unit that should bear joint and several liability is essentially a judgment on whether the integration between the enterprises is close enough, and whether their internal transaction costs are low enough, for a liability-sharing plan to have been formed in advance.
The legal order should measure the transaction costs between enterprises and judge whether they constitute the same commercial technology unit based on factors such as the existence of a continuous contract or similar agreement and the existence of commercial and technical dependence. Generally speaking, the mere existence of a continuous contract or similar agreement between civil subjects does not by itself indicate an interdependent and close cooperative relationship. Whether civil subjects constitute the same commercial technology unit should also be judged by reference to factors such as business operation strategies, mutual technical dependence and interoperability, and exclusivity. Whether multiple entities cooperate under the same brand for the same market group (e.g., Tesla), whether they are closely interdependent with a direct data connection (e.g., a car company and its backend data service provider), and whether the system operates through a proprietary agreement or a closed network (e.g., Alibaba and Ant Group) will all affect the assessment and determination of whether they constitute the same commercial technology unit. In the aforementioned burglary case, companies A, B, and C cooperate continuously through contracts, their commercial interdependence is extremely high, and the collaborative AI system cannot function if any party withdraws; the three should therefore be identified as constituting the same commercial technology unit. If, however, the smart security system fails to activate because of a temporary interruption in the network connection, the victim cannot demand that the network service provider bear joint and several liability with A, B, and C. Smart home devices generally do not require a dedicated network connection from a specific provider, and consumers will not assume a special cooperative relationship between the network service provider and a particular AI ecosystem. Only in exceptional circumstances, for example where a smart home system has a special security agreement with a specific network service provider, the latter provides a dedicated network connection line for the former, and the merchant uses this as a unique commercial advantage to attract consumers, should the network service provider be recognized as having joined the smart ecosystem and as belonging to the same commercial technology unit as the other three parties.
A possible objection is that within the same commercial technology unit there may be a dominant entity, and the internal damage-sharing agreement may not be fair. A dominant enterprise that establishes a closed-loop intelligent ecosystem through contracts may cause small and medium-sized enterprises to become dependent on the system for survival and to find it difficult to exit once they have joined. The dominant enterprise will then lead the entire intelligent ecosystem and shift the liability for damage compensation through contracts. This, however, is not a problem unique to intelligent systems. A dominant traditional car company often requires, through agreements, that where the cause of a defect cannot be ascertained, the component producer bears the liability for damage compensation, or that the component producer bears full responsibility for a vehicle recall or accident. Regardless of the internal arrangement, the same commercial technology unit should still bear joint and several liability towards the infringed party. Whether the internal liability-sharing agreement is fair should be examined in conjunction with relevant norms such as the rules on standard form contracts in the "Civil Code," so as to prevent dominant enterprises from unreasonably squeezing weaker ones.
In summary, the legal order should design special rules for multiple tortfeasors to resolve the problem of damage sharing caused by the unclear identity of the liable party. The specific clause can be stated as follows:
Article n+2: Where damage is caused by an artificial intelligence system and the specific tortfeasor cannot be determined, the members of the same commercial technology unit who cooperate closely and are mutually dependent shall bear joint and several liability for the damage.
5. Conclusion
In the face of the challenges brought by the application of artificial intelligence, the legal community should not remain passive but should actively consider appropriate responses. Expanding product liability to AI systems cannot effectively solve the problems of proving fault and causation, the difficulty of accommodating new types of damages, or the unclear identity of liable parties in torts caused by AI applications. Jurists should maintain the necessary caution and not rashly break conceptual boundaries for the sake of temporary regulatory needs, which would ultimately undermine the systematic coherence of the law. China's future AI legislation should, on the premise of maintaining systemic consistency, adopt a "patchwork" approach, configuring information disclosure rules, rules for determining damages and causation, and special rules for multiple tortfeasors so as to reduce the evidentiary burden on the infringed party at the levels of establishing and assuming liability.
It should be added that although the above new rules benefit the infringed party, they may also be criticized for increasing the legal costs of enterprises. The rules, however, focus on balancing the litigation positions of the two parties and are much milder than subjecting the defendant to strict liability. The strict liability model carries strong symbolic meaning, puts great psychological pressure on market entities, and can easily trigger a chilling effect. If product liability were applied to AI systems, some small and medium-sized enterprises might not dare to engage in AI research and application for fear of strict liability, and enterprises already in the AI field, in order to avoid "getting burned," might remove open-source code or ensure system controllability at the cost of sacrificing learning capability. By comparison, new tort rules that directly target AI characteristics such as opacity and networked operation are more focused on procedural communication: as long as the defendant actively assists in ascertaining the truth, it remains possible to rebut the statutory presumptions. Rules facilitating proof under fault liability will incentivize liable entities to actively uncover organizational or technical loopholes and better achieve AI safety governance.