

Principles of Attribution and Fault Determination in Generative Artificial Intelligence Tort


Wang Liming

First-Class Professor of Liberal Arts, Research Fellow at the Civil and Commercial Law Research Center, Renmin University of China

 

Abstract: Accurately determining the tort liability of generative AI service providers first requires defining the nature of such providers in private law. When different types of AI cause damage, liability should be determined differently. From the perspective of encouraging the development of artificial intelligence, it is not appropriate to impose strict liability on generative AI service providers; rather, fault liability should apply to them. The determination of a service provider’s fault should be based on an objective standard of duty of care, taking into account the existing level of technology and the cost of preventing damage, and distinguishing between the fault of the service provider and the fault of the user. Although the harm caused by generative AI is not fully equivalent to network infringement, it is still necessary to apply by analogy the Notice-to-Delete rule found among the rules on network infringement liability. In determining whether fault is established, consideration should be given to whether the service provider, upon notification by the right holder, took necessary measures in a timely manner to prevent the occurrence of damage.

Keywords: Generative Artificial Intelligence; Principle of Liability; Fault; Notice-to-Delete Rule

 

The emergence of DeepSeek signifies that China has fully entered the era of artificial intelligence, as AI has become a key driving force behind the new wave of technological revolution and industrial transformation. The rapid rise of the “AI+” paradigm is profoundly reshaping modes of production, lifestyles, and systems of social governance. However, the development of generative artificial intelligence has also brought new challenges to the application of tort law. Recently, the Guangzhou Internet Court and the Hangzhou Internet Court respectively rendered judgments in two cases concerning the infringement of copyright by generative AI service providers involving the image of “Ultraman” (hereinafter referred to as the Guangzhou Ultraman Case and the Hangzhou Ultraman Case).

In both cases, the courts primarily resolved the disputes by applying the general principles of tort liability law, together with the Copyright Law of the People’s Republic of China and the Interim Measures for the Administration of Generative Artificial Intelligence Services (hereinafter referred to as the Interim Measures). This demonstrates that, with the detailed provisions of the Civil Code of the People’s Republic of China on network torts, combined with the general principles of tort liability, the existing legal framework is, on the whole, capable of addressing the challenges posed by generative artificial intelligence. Therefore, there is no pressing need for major adjustments to current rules, nor for extensive amendments to the Tort Liability Part of the Civil Code.

Nevertheless, this does not mean that no adjustments are required. Significant controversy remains, particularly concerning the principles of liability attribution and the determination of fault on the part of network service providers. It should be noted that generative artificial intelligence differs fundamentally from traditional online services, especially regarding its operational principles and the basis for liability determination. For instance, when generative AI causes harm, should the establishment of liability for the service provider require proof of fault? If so, how should such fault be determined? These are the central questions this paper seeks to explore.

 

1. Principles of Liability Attribution in Generative Artificial Intelligence Torts

1.1 Defining the Legal Nature of Generative Artificial Intelligence Service Providers

Generative artificial intelligence (AI) service providers are organizations or individuals that use generative AI technology to provide generative AI services (including through programmable interfaces or similar means). Under China’s current legislation, the earliest regulation to define the concept of generative AI service providers is the Interim Measures for the Administration of Generative Artificial Intelligence Services. Before the promulgation of the Interim Measures, Chinese laws and regulations did not specifically stipulate this concept. Although the Interim Measures prescribe the obligations and responsibilities that generative AI service providers must assume, such obligations and responsibilities are defined primarily from the perspective of public law.

Therefore, the determination of tort liability for generative AI service providers must still find its legal basis in the Civil Code, particularly the Part on Tort Liability and other special private laws. Pursuant to the provisions of the Civil Code on tort liability, accurately determining the tort liability of generative AI service providers first requires clarifying their legal nature under private law. The author believes that the nature of generative AI service providers should be understood from the following aspects:

First, generative AI service providers should be regarded as service providers rather than product manufacturers. Some scholars hold that generative AI service providers should bear producer liability for the content generated by AI, meaning they should be considered as product manufacturers and therefore subject to product liability. Others argue that generative AI, in essence, constitutes a service rather than a product, and that its providers should bear service liability rather than product liability. Article 9 of the Interim Measures stipulates that “providers shall bear the responsibility of network information content producers in accordance with the law,” a provision that mirrors, in logic, the producer liability framework under the Product Quality Law, which provides that “producers shall be responsible for the quality of the products they produce.”

The author contends that generative AI service providers should indeed be classified as service providers rather than product manufacturers. On the one hand, in terms of form, the provision of generative AI more closely resembles a service model than a product sales model. On the other hand, in terms of liability determination and assumption, classifying generative AI service providers as product manufacturers would subject them to strict liability. The imposition of such heavy liability would, in turn, hinder the development of the AI industry.

Second, the category of generative AI service providers is broad, and their legal nature should be determined differently in different circumstances, with corresponding differentiated regulation. For example, generative AI may involve multiple actors—developers, data providers, service providers, and users—raising questions as to how liability should be allocated among them. Some scholars have noted that, unlike the traditional distinction among “technical supporters, service providers, and content producers,” the generative AI industry exhibits an integration of technical support, service provision, and content production.

This means that the concept of generative AI service providers encompasses a wide range of roles, including both model developers (such as OpenAI, which provides foundational model services) and operators who deploy and manage these models. Accordingly, the EU Artificial Intelligence Act does not adopt an overly broad concept of “AI service provider”; instead, it differentiates AI system operators into providers, deployers, importers, distributors, product manufacturers, and authorized representatives.

Therefore, when determining the tort liability of generative AI service providers, it may be necessary to distinguish among different types of service providers—such as training data providers, AI developers, and providers of different types of AI services—and to determine each party’s civil liability according to the specific cause of the infringing act, rather than attributing all liability to the “generative AI service providers” as broadly defined in Article 22 of the Interim Measures.

The author argues that the concept of “generative AI service provider” in the Interim Measures is overly abstract and broad, and thus requires a more refined classification. In cases of network infringement, the liable party is the network service provider, who has a direct relationship with the victim. However, generative AI infringement often involves multiple actors and stages—such as data collection, data training, and content generation—each potentially conducted by different entities. Therefore, each actor should bear distinct liabilities according to their respective degree of fault.

Third, generative AI service providers typically constitute new types of network service (content service) providers. Some argue that because generative AI service providers may not always rely on the Internet to deliver their services, they should not be considered network service providers. This view, however, is debatable. In most cases, generative AI service providers deliver their services through the Internet and thus share certain similarities with traditional network service providers.

Nevertheless, generative AI generates information through interactive engagement between users and AI, rather than through the provider’s unilateral dissemination of information or direct publication by users or third parties. Hence, generative AI service providers are neither traditional content service providers nor technical service providers, but rather constitute a new hybrid form of network service (content service) provider.

In theory, network service providers are generally divided into content service providers and technical service providers, whose liability differs in scope and determination. The question, then, is whether generative AI service providers should be classified as content service providers or technical service providers. Some scholars argue that they fall under the former, while others claim that they are closer to the latter. Still others maintain that generative AI service providers are neither traditional content nor technical providers, but a new type of network service provider that merits an independent classification.

The author contends that technical service providers mainly offer technical services and do not directly supply information content, and therefore have limited obligations to review such content. By contrast, generative AI service providers do more than merely provide a platform or technical means for users to publish information; they also generate information content based on user instructions, which gives them a content-providing function. However, since they do not proactively disseminate information but generate it in response to user prompts, they differ from conventional content service providers.

Accordingly, generative AI service providers occupy an intermediate position between content service providers and technical service providers. In determining their tort liability, the particular characteristics of their services should be taken into account. Of course, if a generative AI service provider independently publishes information content, it should then be classified as a content service provider, and its liability should be determined in accordance with the rules applicable to such providers.

1.2 Generative Artificial Intelligence Service Providers Should Not Bear Strict Liability

There is ongoing debate regarding the type of liability that should be imposed on generative AI service providers—specifically, whether they should be subject to fault liability or strict liability. Some renowned tort law scholars abroad have advocated for the adoption of strict liability. For example, Professor Gerhard Wagner of Humboldt University argues that when ordinary users cause harm through the use of AI products, software designers or product manufacturers should bear corresponding product liability, since they have greater knowledge of AI products and are thus better positioned to effectively prevent infringing incidents. Furthermore, AI technologies make products and services more efficient and intelligent, reducing the probability of accidents; therefore, even under strict liability, the overall burden on providers and users would be significantly mitigated.

In China, some scholars also argue that generative AI is essentially a product, and that its service providers should bear product liability. Article 9 of the Interim Measures likewise refers to the producer liability principle under the Product Quality Law when determining the responsibility of service providers—effectively applying a strict liability approach to such providers.

Although the adoption of strict liability can help distribute risk and loss while alleviating public anxiety toward AI, it must be recognized that “artificial intelligence” is itself a very broad concept, and the determination of liability should differ according to the type of AI involved and the manner in which harm is caused.

For instance, damages caused by autonomous vehicles or drones cannot be equated with damages caused by generative AI. The author suggests dividing AI into two categories, physically interactive AI and information-interactive AI, which differ in the manner in which they cause harm. Physically interactive AI may involve physical entities such as rail transport systems, aircraft autopilot devices, robots, robotic dogs, autonomous driving systems, and industrial robotic arms, in which case the provider may be subject to high-risk liability. By contrast, information-interactive AI causes harm through the output of information.

In the author’s view, whether strict liability should apply in the former category is open to discussion. However, for the latter—generative AI—it is inappropriate to impose strict liability on service providers, for the following reasons:

First, the principle of liability attribution for harm caused by generative AI should align with the need to encourage the development of artificial intelligence.

At the current stage of China’s development, a prudent yet inclusive approach should be adopted toward generative AI, aiming to promote its advancement as much as possible. If strict liability were imposed on generative AI service providers, such that they would be held liable for all damages regardless of fault, it would unduly increase their liability burden and thereby hinder industrial growth.

(1) Under strict liability, providers must bear responsibility whenever damage occurs. Although this benefits victims’ protection, it would unduly restrict AI research and application. All innovation carries inherent risks—if innovators face excessive liability for every risk, they may be deterred from engaging in bold and transformative research.

(2) When strict liability is applied in contexts where the scope of liability is uncertain, the resulting high costs of AI development and excessive burdens on service providers may lead to “catastrophic liability risks.” While large corporations might withstand such risks, small and medium-sized enterprises (SMEs) would likely be unable to bear them.

(3) Compared with fault liability, strict liability disregards the presence or absence of fault, and thus judges need not ascertain whether the designer or developer adopted the optimal level of precaution in AI design and testing. However, this may inadvertently restrict technological innovation. The prevailing academic view holds that the tort liability regime governing AI decisions should promote both safety and innovation in AI technologies and that, in the future, liability risks arising from AI should be mitigated through insurance mechanisms. Nevertheless, strict liability does not adequately balance the freedom to research and use AI technologies with the protection of victims’ rights and interests.

Second, the principle of liability attribution for harm caused by generative AI should remain consistent with the liability principles established in the Civil Code of the People’s Republic of China.

On the one hand, generative AI itself does not present any special danger that would justify strict liability, and imposing such liability on service providers would be inconsistent with the theoretical foundation of strict liability. The theoretical bases for strict liability include risk creation/control, compensation, damage prevention, and protection of vulnerable victims. The primary policy objective of strict liability is to achieve distributive justice by allocating “unfortunate losses” fairly, exempting victims from the burden of proving the tortfeasor’s fault, thereby facilitating compensation and preventing tortfeasors from evading liability. In China’s Civil Code, strict liability is closely associated with the protection of personal rights and interests. However, generative AI—being a typical information-interactive AI—does not pose a serious threat to personal safety, health, or life, nor does it directly implicate the protection of such rights. Therefore, the imposition of strict liability is unwarranted.

On the other hand, the opacity of AI algorithms means that when the outcomes and potential impacts of AI decisions are unpredictable, the preventive function of strict liability—designed to incentivize developers to take precautionary measures—loses its practical foundation. Strict liability only makes sense when the designers or users of AI can foresee the potential harmful effects of AI systems and take preventive actions accordingly. As AI systems become more opaque and less predictable, strict liability becomes ineffective, since its validity depends on the tortfeasor’s control or foreseeability of their behavior. This foundational condition does not hold true in the context of AI.

Third, the principle of liability attribution for generative AI should correspond to its application scenarios.

As a general-purpose technology, AI is used in a wide range of contexts, each with vastly different levels of risk and transparency. The operational characteristics of generative AI involve three stages: data input (requiring massive data collection for training), data processing (training through various algorithms), and data output (generating results based on user prompts and stored data).
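
For readers less familiar with these systems, this three-stage structure can be pictured schematically. The sketch below is purely illustrative: every function name is hypothetical, and the "training" step is reduced to trivial token counting solely to show how data input, data processing, and prompt-conditioned output relate to one another.

```python
# Purely illustrative sketch of the three stages described in the text.
# All names are hypothetical; real generative AI systems are far more complex.

def collect_training_data(sources: dict[str, list[str]]) -> list[str]:
    """Stage 1 (data input): aggregate raw documents collected at scale."""
    return [doc for docs in sources.values() for doc in docs]

def train_model(corpus: list[str]) -> dict[str, int]:
    """Stage 2 (data processing): stand-in for algorithmic training,
    reduced here to counting tokens in the corpus."""
    model: dict[str, int] = {}
    for doc in corpus:
        for token in doc.split():
            model[token] = model.get(token, 0) + 1
    return model

def generate(model: dict[str, int], prompt: str) -> str:
    """Stage 3 (data output): produce a result conditioned on the user's
    prompt and the stored model (trivially echoed here)."""
    known = [t for t in prompt.split() if t in model]
    return f"output conditioned on known prompt tokens: {known}"

corpus = collect_training_data({"public_web": ["alpha beta", "beta gamma"]})
print(generate(train_model(corpus), "beta delta"))
```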

Because the output results of generative AI are highly uncertain, applying strict liability to its operators would excessively increase their legal burden. Even within the European Union’s stringent regulatory environment, the Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies adopts a two-tier framework, distinguishing between high-risk and low-risk AI.

The report identifies two criteria for high-risk AI, namely operation in non-private environments and the likelihood of causing significant harm. In China, disputes arising from generative AI typically involve violations of personality rights or intellectual property rights by generative AI service providers. For such disputes, the Civil Code continues to apply the fault liability principle. By applying Article 1165, paragraph 1 of the Civil Code, most practical disputes can be adequately resolved. Therefore, there is no need to establish a special strict liability regime for generative AI service providers.

Fourth, the principle of liability attribution for generative AI should correspond to its service-based, rather than product-based, nature. Some scholars argue that AI-related torts should be governed by strict liability on the grounds that harm caused by generative AI exhibits characteristics of product liability. Article 9 of the Interim Measures similarly adopts the product liability framework under the Product Quality Law when determining the liability of service providers, thereby implicitly applying product liability rules.

However, as previously discussed, generative AI service providers primarily offer services rather than products, and product liability provisions should not apply. In product liability, the liable party is typically the one most capable of controlling product risk. Yet, in the context of AI operations, this assumption often fails: the operation of generative AI depends on algorithms whose functioning is inherently uncontrollable. Imposing product liability on generative AI service providers would therefore overextend their liability.

Moreover, the operation of generative AI depends not only on the service provider’s technology but also on the user’s prompts and even potential third-party cyberattacks. Hence, the resulting harm is often the product of multiple actors and factors. Product liability, in contrast, primarily protects the personal safety and property of consumers. Generative AI, however, mainly provides informational services that do not directly endanger users’ personal or property safety, making the application of product liability inappropriate.

The discussion of the principle of liability attribution for harm caused by generative AI ultimately demonstrates that strict liability should not apply to generative AI service providers. In China’s judicial practice—such as the Guangzhou Ultraman Case and the Hangzhou Ultraman Case—courts have continued to apply the fault liability principle rather than strict liability. This indicates that judicial practice likewise rejects the application of strict liability to generative AI service providers.

1.3 Fault Liability as the General Principle of Attribution

The adoption of fault liability for damages caused by generative artificial intelligence is consistent with the system of attribution principles under the Civil Code of the People’s Republic of China. The reasons are as follows:

On the one hand, the principle of fault liability aligns with the current need to encourage the development of the artificial intelligence industry in China. Fault liability can effectively balance the protection of freedom of conduct and the protection of legal rights and interests. It also meets the need of generative AI service providers to control risks, granting them exemption opportunities when risks cannot be controlled under existing technological conditions, and encouraging them to face technological challenges during AI development. Limiting the scope of liability for service providers is more conducive to the wider application of AI. In principle, AI service providers should only bear fault liability, and their duty of care can serve as a “controller” of the scope of tort liability—balancing users’ freedom of conduct and the protection of victims’ rights and interests, while also coordinating the internal allocation of responsibility between users and providers.

On the other hand, under the principle of fault liability, if an AI service provider fulfills the duty of care expected of a reasonable person, it should not bear tort liability; otherwise, it should bear compensation liability for damages caused by artificial intelligence. Of course, if both the generative AI service provider and the user are at fault (for example, when a user designs a series of misleading prompts, and the generative AI lacks necessary safeguards for privacy and personal information protection, resulting in the leakage of someone’s personal information), then, depending on the specific circumstances of the case, Articles 1168 to 1172 of the Civil Code should apply, leading to joint liability or several liability.

It should also be noted that when generative AI is of the information-interaction type, the rights and interests it may infringe are civil rights and interests whose objects are personality interests or intellectual achievements, such as portrait rights, name rights, reputation rights, and copyright. This determines that, in accordance with the Civil Code, the Copyright Law, and other relevant provisions, the harm caused by generative AI service providers should be governed by the general principle of fault liability.

2. Determination of Fault in Generative Artificial Intelligence Infringement

2.1 The Standard of Fault: Violation of the Duty of Care

Once the principle of fault liability is adopted, the first issue that arises is how to determine whether the service provider is at fault, and what standard should be used to make such determination. Modern tort law theory generally adopts the theory of objective fault—that is, to determine whether the actor has violated an objective duty of care as the standard for assessing fault. In Anglo-American tort law theory, negligence is typically understood as the failure to fulfill a duty of care. Similarly, the determination of fault for generative AI service providers should adopt an objective standard of duty of care. The core function of the duty of care theory is to serve as an external standard for assessing whether an actor is at fault.

In China’s judicial practice, courts also mainly determine the fault of service providers by examining whether they have violated their duty of care when assessing their tort liability. For example, in the Guangzhou Ultraman Case, the court held that “service providers shall exercise reasonable duty of care when providing generative AI services. However, in this case, the defendant, as a service provider, failed to fulfill such reasonable duty of care.” Similarly, in the Hangzhou Ultraman Case, which involved the issue of whether the service provider constituted contributory infringement, the court held that a service provider only constitutes contributory infringement when it is at fault for the user’s infringing conduct.

In determining fault, the court proposed that multiple factors should be dynamically considered to adjust the standard for determining fault. Specifically, it should be assessed according to the “reasonable person standard within the same industry.” When a generative AI service provider can prove that a reasonably attentive provider in the same industry could not have discovered that the generated content might constitute infringement, or that it has already taken necessary measures consistent with the technological level at the time of damage to prevent harm but still could not prevent its occurrence, it should be deemed to have fulfilled its reasonable duty of care and therefore be free of fault.

Ultimately, after comprehensively considering factors such as the nature of the service, the notoriety of the copyrighted work, the obviousness of the alleged infringing act, the potential consequences of the infringement, and the platform’s profit model, the court held that the service provider should have known of the user’s infringing act but failed to take necessary measures, thereby being at fault and constituting contributory infringement. Assessing the fault of AI service providers based on an objective duty of care helps enhance their ability to predict the legality of their conduct and potential civil liability, thus safeguarding their freedom of conduct and business autonomy while encouraging them to develop AI technology within the bounds of a certain objective duty of care.

2.2 Criteria for Assessing the Objective Duty of Care of Service Providers

In cases of infringement involving generative artificial intelligence, how should the standard of care for service providers be defined, and what specific obligations does it include? The author believes that the duty of care for generative AI service providers mainly includes the following aspects:

First, whether reasonable verification obligations have been fulfilled regarding the reliability of information sources. The premise of generative AI output is the existence of relevant data, and the quality of training data directly determines the reliability of generated results. Due to the limitations of current technology, generative AI service providers cannot guarantee that generated content is entirely objective or accurate; however, such technological limitations cannot serve as an “unlimited” exemption from liability. Service providers should adopt reasonably reliable technical solutions and information systems. Especially when the service provider and developer are separate entities, ensuring a minimum level of AI service functionality is a necessary requirement for achieving the purpose of an information service contract.

Generative AI service providers should fulfill a certain level of verification duty regarding their information sources to ensure the reliability of generated results. Since generative AI services exhibit characteristics of both content providers and technical service providers, the determination of their verification duty should take into account the nature of their services. However, it should be noted that, unlike other online services, generative AI operates based on massive amounts of data, most of which are publicly available. To ensure sufficient data input, the verification duty of AI service providers should be appropriately reduced, and they should not be imposed with an excessively onerous obligation to examine every information source.

Second, whether the generated results contain clearly infringing content.

According to Article 1197 of the Civil Code, where a network service provider knows or should know that a network user is using its service to infringe upon others’ civil rights and interests, it shall promptly take necessary measures to prevent the occurrence or expansion of damage. Although this rule applies to cases where network users use online services to commit infringement, it can also be applied by analogy to determine the duty of review of generative AI service providers regarding information content.

Generative AI operators should implement necessary filtering mechanisms during the content generation process to avoid producing clearly infringing content (such as obscene photos, fabricated information or actions suggesting sexual harassment, or insulting and defamatory statements). Strictly speaking, the cost of such compliance measures is not high and would not unduly burden generative AI operators.

This requirement mainly applies at the data input stage, where the service provider should conduct necessary screening of input data, such as by setting keyword filters to identify clearly infringing information. For instance, in cases involving large-scale collection of sensitive personal information, the provider should perform necessary screening or anonymization to prevent infringement incidents such as information leakage. When a service provider knows or should know, or has received a notification, yet fails to take necessary measures, it should be deemed not to have fulfilled its reasonable duty of care and to be at fault. The necessary measures here refer to reasonable preventive actions feasible under existing technological conditions, including filtering prompts, managing obvious or repeated infringing acts by users, prominently labeling content, and providing risk warnings.
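
As a rough illustration of how such screening might work at low cost, consider the minimal sketch below. It is a hedged example only: the blocked-keyword list, the identifier pattern, and the function name screen_input are hypothetical placeholders, and production systems would rely on far more sophisticated classifiers than literal keyword matching.

```python
import re

# Illustrative only: a toy screening pass of the kind described above.
# The keyword list and the anonymization pattern are hypothetical placeholders.

BLOCKED_KEYWORDS = {"defamatory-term", "obscene-term"}  # placeholder entries
ID_LIKE_PATTERN = re.compile(r"\b\d{15,18}\b")          # ID-card-like digit runs

def screen_input(text: str):
    """Reject input containing clearly infringing keywords; otherwise
    anonymize sensitive-looking identifiers before further processing."""
    lowered = text.lower()
    if any(kw in lowered for kw in BLOCKED_KEYWORDS):
        return None  # refuse clearly infringing input outright
    return ID_LIKE_PATTERN.sub("[REDACTED]", text)

assert screen_input("my id is 110101199001011234") == "my id is [REDACTED]"
assert screen_input("some defamatory-term here") is None
```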

Third, whether the service provider complies with relevant regulatory requirements. To ensure the lawful operation of generative AI, relevant Chinese legislation stipulates specific regulatory measures that service providers must observe—this is also an important basis for determining whether they have fulfilled their duty of care.

For example, in the Guangzhou Ultraman Case, the Guangzhou Internet Court, by referring to the Interim Measures, set forth the defendant’s reasonable duty of care, including establishing a complaint and reporting mechanism, warning users of potential risks, and providing prominent labeling. These obligations represent the statutory duties that service providers must fulfill in the course of providing services. Article 12 of the Interim Measures stipulates that service providers must label AI-generated images, videos, and other content. The purpose of such labeling is to inform users of the functions and limitations of the services and to make them aware of how the relevant content was generated. Additionally, listing the information sources of generated content allows relevant parties to understand that such content is merely generated based on existing online information and may not possess sufficient professionalism or accuracy.

Furthermore, according to Article 10 of the Interim Measures, service providers shall guide users to understand and use generative AI content rationally and scientifically, and must not use generated content to harm others’ image, reputation, or other legitimate rights and interests. Generative AI service providers must comply with these regulatory requirements when providing services; failure to do so indicates that they have not fulfilled their duty of care.

Fourth, whether the service provider takes necessary and timely measures to prevent harm upon receiving notice from the rights holder. If an individual discovers that AI-generated images, text, or other content infringe upon their reputation, privacy, or copyright and requests the service provider to take necessary measures such as deletion or filtering, but the service provider fails to act accordingly, the provider should be considered at fault. In such cases, pursuant to Article 1195, Paragraph 2 of the Civil Code, the service provider should be deemed to be at fault and may bear joint liability with the infringing user for the aggravated damages.

2.3 Determination of the Fault of Service Providers Should Take into Account the Existing Level of Technology

When determining whether a service provider is at fault, the existing level of technology should be taken into account. The existing level of technology refers to the technological level at the time the damage occurred; the question is whether, given that level, the service provider took reasonable measures to prevent the occurrence of damage. If the service provider failed to take reasonable technical measures to prevent or mitigate the damage, it should be determined that the service provider was at fault.

The reason for adopting the technological level at the time of damage is mainly because the technological level of generative artificial intelligence may continue to evolve and change. Adopting the technological level at the time the damage occurred conforms to the characteristics of AI-related torts and helps avoid unduly aggravating the liability of service providers. Of course, as the level of technology advances, the duty of care required of service providers may also change. Therefore, the standards for determining fault should be adjusted in light of the current state of technological development.

Adopting the existing technological standard also requires judges to adopt a prudent and inclusive attitude when determining fault. At the current stage, in order to encourage the development of the artificial intelligence industry, adopting the existing technological standard in determining the fault of service providers is a relatively feasible approach. In judicial practice, some courts have adopted the existing technological standard when determining the fault of service providers.

For example, in the Hangzhou Ultraman Case, the court held that if a service provider can prove that it has taken necessary measures consistent with the level of technology at the time the damage occurred to prevent harm, but still could not prevent the harm, then it should be deemed to have fulfilled its reasonable duty of care and not be at fault. This position is reasonable. If service providers are required to meet a standard beyond the current level of technology, this may impose an excessive duty of care, potentially causing their liability to “slide” into the realm of strict liability and, to some extent, hindering the development of artificial intelligence.

When using the standard of existing technological levels to determine the duty of care of service providers, it is also necessary to clarify the connotation and criteria of the existing technological level itself.

In the author’s view, on the one hand, the existing technological level should refer to the prevailing industry standard, that is, the technical standard commonly adopted within the industry. It should also include the technical effects and software functions achievable at the current stage within the industry. It should be noted that there may be differences in technological levels among service providers. Generative artificial intelligence systems have emerged in rapid profusion across different industries, and the technological levels of AI in different industries vary widely, making cross-industry comparisons difficult. Thus, the level of care of generative AI service providers should only be compared within the same industry.

On the other hand, within the same industry, the level of duty of care should be consistent with international standards. At present, China’s artificial intelligence industry has reached a leading position in the world. If fault were determined according to standards lower than international levels, it would be detrimental to the optimization and iteration of China’s AI industry. Conversely, aligning with international standards helps encourage service providers to continuously improve their technological capabilities so that the services they provide meet or even surpass international technological standards. Moreover, adopting universally accepted technological standards also helps protect the rights and interests of victims and prevents service providers from unjustifiably invoking technological development as a defense. For example, in the Hangzhou Ultraman Case, the court applied a “reasonable person standard within the same industry,” that is, determining the duty of care according to the general technological level of the industry, which is a reasonable approach.

2.4 Determination of the Fault of Service Providers Should Take into Account the Cost of Preventing Damage

An important trend in the development of modern tort law is the introduction of the theory of risk prevention costs into the principle of fault liability, thereby achieving a rational distribution of liability from an efficiency perspective. In the study of tort law, Guido Calabresi, a leading scholar of modern tort law, proposed the concept of the “cheapest cost avoider,” arguing that modern tort law should assign liability to the party who can prevent the accident at the lowest cost.

In the field of artificial intelligence, some scholars, from the perspective of law and economics, argue that tort liability should be allocated according to the principle of risk—that is, the party who can best control and distribute risk should be the primary liable party for damages arising from AI applications. Only by assigning tort liability to those who can control and distribute the risks associated with AI applications can harm be better prevented and reduced, victims be effectively compensated, and the goals of tort law and AI law be jointly realized.

In the author’s view, generative artificial intelligence, as a technological tool, does not inherently pose particular dangers to personal or property safety. However, as with any technological advancement, new risks inevitably emerge. Generative AI may lead to large-scale leaks of personal information or infringe on intellectual property rights or personality rights. The key issue is how to prevent such risks and damages. From the perspective of encouraging the development of generative AI, service providers should not be subjected to an excessive burden.

Therefore, in determining whether a service provider is at fault, the cost of preventing damage should also be taken into account. On the one hand, under the existing technological level, if the cost of taking compliant preventive measures by the service provider far exceeds the potential damage to data subjects, it may be determined that the service provider is not at fault.

On the other hand, as the legal maxim goes, “The law does not compel the impossible.” If service providers are required to bear excessively high costs in preventing harm, this would impose an undue burden on them, which is inconsistent with the current cautious and inclusive attitude toward generative AI. For example, in the Guangzhou Ultraman Case, the court held that the defendant, without authorization, used the plaintiff’s copyrighted works to train its large model and generated substantially similar images, thereby infringing the plaintiff’s reproduction and adaptation rights. The defendant was ordered to cease the infringement and to take further measures, such as filtering relevant keywords.

However, the issue lies in determining which keywords are considered “relevant,” as the interpretation can be overly broad. Requiring service providers to filter too many keywords to prevent harm may unduly increase their burden. The author believes that if the cost of preventing damage is not taken into account and service providers are required to completely eliminate harm, this would not only unduly restrict generated content and limit the functionality of AI, but also impose excessively high costs on service providers, violating the principle of prudence and inclusiveness and hindering the development of AI technology.

If, under the existing technological level, filtering keywords cannot prevent all harmful outcomes, or if listing too many keywords cannot prevent harm, it should be determined that the service provider, under current technological conditions, has fulfilled its duty of care and is not at fault. Even if harm occurs, the service provider should not be held liable.
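
Viewed more formally, the cost-benefit test running through this subsection parallels the Learned Hand formula familiar from common-law negligence analysis, and it is consistent with Calabresi’s cheapest-cost-avoider idea cited above. The restatement below is offered only as an illustration, not as a standard adopted in the Chinese cases discussed here: fault exists only where the burden of precaution falls short of the expected harm.

```latex
% Learned Hand formula (illustrative restatement): B is the burden (cost) of
% adequate precautions, P the probability of harm, and L the gravity of loss.
B < P \cdot L \;\Longrightarrow\; \text{fault (the precaution was worth taking)}
\qquad
B \ge P \cdot L \;\Longrightarrow\; \text{no fault (the precaution was too costly)}
```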

2.5 Distinguishing Between the Fault of Service Providers and That of Users

Generative artificial intelligence has an information-interactive nature, and the results it produces depend not only on data sources, AI models, and algorithms, but also on the commands and information input by users. If the information input by a user is false, the output generated by the AI system may also be false. Therefore, the establishment of AI-related torts may result from the combined actions of both the service provider and the user, necessitating a distinction between their respective liabilities.

The author believes that in cases involving AI-related torts, the responsibilities of service providers and users should be distinguished as follows:

First, where the service provider has fulfilled its duty of care. As mentioned earlier, the fault liability principle should be applied to the tort liability of service providers. Thus, even if a user uses generative AI to commit an infringement, if the service provider has fulfilled its duty of care—for example, by reviewing the information sources as necessary—it should be deemed not at fault. In this case, the infringement should be attributed to the user’s own conduct, and the user should bear tort liability.

For example, in the aforementioned two Ultraman cases, if a user intentionally imitates the artistic style of Ultraman and uses a large model to generate works similar to Ultraman, the user should bear the primary responsibility for the infringement, as the fault mainly lies with the user. Likewise, if a user instructs a generative AI system to collect personal historical data to invade others’ privacy or steal personal information for fraudulent purposes, the user should bear liability.

Second, where the service provider has not fulfilled its duty of care. During the training stage, if the service provider collects large amounts of data involving others’ privacy or trade secrets without consent and fails to anonymize such data during model training, it should be deemed at fault. During the output stage, in cases of AI-related infringement, if the service provider fails to exercise its duty of care, both the service provider and the user share joint fault for the infringement and should bear liability in proportion to their respective degrees of fault.

For example, suppose a user employs generative AI to produce information that infringes upon a third party’s rights by intentionally inducing the AI system (deliberately increasing the frequency of certain words, altering their position in sentences, or supplying semantically associated terms), or by falsely accusing someone of sexual harassment in order to manipulate the AI into producing particular images or desired outputs, thereby causing the AI to generate “hallucinations.” Such conduct clearly indicates that the user is at fault. If, in such a case, the service provider has also failed to fulfill its duty of care, the two parties constitute joint tortfeasors without common intent and should bear corresponding liability to the victim.

 

3. Analogical Application of the Notice-to-Delete Rule in Online Infringement

3.1 The Applicability of the Notice-to-Delete Rule

The operation of generative artificial intelligence relies on massive amounts of information and provides services to tens of thousands of users. Therefore, requiring generative AI service providers to conduct strict ex-ante reviews and promptly detect potential infringements in the services they provide is indeed difficult. Typically, it is only when users discover that the information generated by the AI constitutes an infringement and notify the service provider that the latter becomes aware of the existence of an infringing act.

In addition, generative artificial intelligence remains in the developmental stage and possesses inherent flaws such as “hallucinations,” which necessitate technical refinement and improvement. Its training corpora also need to be enriched with information provided by users. These characteristics of generative AI determine that service providers find it difficult to promptly identify infringing activities. This raises the question of whether the Notice-to-Delete Rule (also known as the “safe harbor rule”) stipulated in Article 1195 of the Civil Code of the People’s Republic of China can be applied by analogy.

It must be acknowledged that cases in which users suffer infringements of their rights and interests during the use of generative artificial intelligence do not fully conform to the legal structure of the safe harbor rule prescribed in Article 1195 of the Civil Code. The application of the safe harbor rule involves three parties: first, the network service provider or platform; second, the user who commits the infringing act; and third, the user whose rights and interests are infringed. When a user engaging in infringement publishes infringing content on a platform, and the affected user submits preliminary evidence requesting the platform to take necessary measures, the platform may be exempted from liability as long as it takes such measures. However, in the case where generative artificial intelligence outputs infringing content, the infringing information originates directly from the platform itself rather than from the user committing the infringing act.

This structural difference leads to a further issue: should generative AI service providers enjoy the special protection of the safe harbor rule? The reason Article 1195 of the Civil Code adopts the safe harbor rule is to reduce the cost of ex-ante content review for online service providers or platforms, while also safeguarding users’ business freedom and freedom of expression. In contrast, generative AI service providers should review the information generated by their own products or services in advance or implement certain filtering mechanisms to prevent infringements of others’ civil rights and interests.

Although there are certain differences between generative AI service providers and traditional network service providers, the author believes that it is still necessary to apply the Notice-to-Delete Rule in online tort liability by analogy to cases involving harm caused by generative AI. The main reasons are as follows:

First, generative artificial intelligence services essentially fall within the category of online services. Compared with other online service providers, the services provided by generative AI have certain particularities—especially since the information generated is provided to users rather than directly to the general public. However, because numerous and unspecified users can query generative AI systems or issue commands requiring them to provide information, the information generated by AI is, in effect, also made available to an indeterminate public, thereby possessing a certain degree of publicity. Therefore, in a broad sense, generative AI service providers can be regarded as falling within the scope of network service providers, and the general rules of online infringement may apply to them.

Second, applying the Notice-to-Delete Rule by analogy to generative AI service providers helps prevent the occurrence and expansion of harm. According to Article 1195 of the Civil Code, upon receiving a user’s notice, a network service provider shall take necessary measures such as deletion, blocking, or disconnection to prevent the occurrence or expansion of damage. The provision’s formulation of technical measures is open-ended, allowing network service providers to adopt diverse means to prevent further harm. In cases of AI-related infringement, applying the Notice-to-Delete Rule by analogy to service providers can likewise prompt them to adopt keyword filtering and other technical measures to prevent or mitigate damage.

Third, applying the Notice-to-Delete Rule by analogy provides a degree of protection for generative AI service providers. The core function of the safe harbor rule is to protect network service providers, encourage the development of the Internet, and promote innovation—legislative purposes that are equally applicable to generative AI technology. It should particularly be noted that, compared with other online service providers, generative AI systems process vast amounts of information. Service providers are typically unaware of whether such information processing may cause harm or whether their outputs may infringe upon others’ rights and interests. Furthermore, generative AI has an interactive nature, meaning that its outputs depend on the user’s inputs, and given the sheer number of users, AI systems themselves are also prone to “hallucination” phenomena.

Under such circumstances, imposing liability on service providers immediately upon the emergence of problematic outputs could subject them to excessive responsibility. Therefore, the Notice-to-Delete Rule should be applied by analogy to generative AI service providers, such that they are obliged to take necessary measures to prevent harm only after receiving notification from users. This approach not only aligns with the characteristics of generative AI services but also helps protect the interests of service providers and prevents undue expansion of their liability.

3.2 Specific Application of the Notice-to-Delete Rule

3.2.1 Analogical Application of the Notice-to-Delete Rule

The so-called analogical application refers to the situation in which, in the absence of explicit legal provisions for a specific case, adjudicators invoke and apply legal provisions governing analogous circumstances to cases not directly regulated by law but sharing similar characteristics. Simply put, analogical application means “when a matter is not directly regulated by law, the provision concerning a similar matter shall be applied by analogy.” It is thus a method of filling legal gaps where the law provides no specific regulation.

When determining the liability of generative artificial intelligence (AI) service providers, there exist differing views as to whether the Notice-to-Delete Rule should be applied by analogy or directly applied.

In practice, the Guangzhou Ultraman Case involved direct infringement by the service provider. After the copyright holder notified the provider, the court held that the provider failed to take necessary measures and thus constituted infringement. In essence, the court analogically applied the Notice-to-Delete Rule. The Hangzhou Ultraman Case involved indirect infringement by the service provider. In that case, a user utilized the provider’s foundational model to further train a new model using an existing work and subsequently published the generated output on the platform. The court found that the platform knew or should have known of the infringing act and failed to take necessary measures, thus constituting infringement. It can be seen that this case actually applied the “knowledge rule” as prescribed in Article 1197 of the Civil Code. Therefore, in scenarios where generative AI causes harm, courts have divergent opinions on whether the Notice-to-Delete Rule should be applied by analogy.

The author believes that the Notice-to-Delete Rule of Article 1195 of the Civil Code involves three parties: the infringing user, the network service provider, and the victim. The scenario of harm caused by generative AI, by contrast, involves only two parties, namely the generative AI service provider and the victim, and the infringing content originates directly from the AI service rather than from a user’s infringing act. The situation therefore does not fully conform to the literal wording of Article 1195.

However, from the perspective of teleological interpretation, Article 1195 of the Civil Code aims to reasonably balance the interests of network service providers and users, preventing providers from bearing an excessively heavy duty of prior review and allowing them to avoid liability by taking necessary measures upon receiving notice from the victim. This legislative purpose should likewise apply to generative AI service providers. Moreover, in a broad sense, generative AI service providers also fall within the category of network service providers. Therefore, the scenario in which generative AI causes harm may be analogically governed by Article 1195 of the Civil Code.

3.2.2 Special Issues in Applying the Notice-to-Delete Rule to Generative AI Infringement

First, as mentioned earlier, in cases of generative AI infringement, the relevant infringing content may be automatically generated by the AI based on users’ instructions or on information provided by users. In such cases, the typical three-party structure underlying the Notice-to-Delete Rule does not exist, which makes its application somewhat special. On one hand, the notifying party is usually the user, but the user may not necessarily be the victim. On the other hand, the infringing content is not published by other users through the online platform but is automatically generated by the AI system itself, further distinguishing its application.

Second, since there is no user directly committing the online infringement, when the content generated by generative AI constitutes infringement, the service provider, after receiving notice from the user, does not need to forward that notice to other users. Instead, it must take measures such as correcting the information or implementing keyword filtering to prevent the occurrence of further harm.

Third, in general online infringement, the infringing information is typically published by a third party, and the specific victim notifies the network service provider to remove the information. However, in the case of generative AI, the infringing results may be known only to specific users. When a user discovers false or harmful content relating to themselves, they may legally notify the AI service provider to delete such information. In this scenario, the counter-notice mechanism does not apply.

Fourth, unlike other online services, generative AI does not publicly disclose information to the general public but generates outputs for specific users according to their instructions. Therefore, when the Notice-to-Delete Rule is applied by analogy, once the service provider receives a user’s notice, it may be required not only to delete the generated content but also to take other measures to prevent the generation of similar infringing content in the future.

For example, if the infringing content is generated due to inaccuracies in the source data, the generative AI service provider should promptly correct or delete the relevant information, or anonymize the data to prevent further generation of infringing outputs arising from inaccurate content. Similarly, as Article 1195 of the Civil Code stipulates the need to take necessary measures such as deletion, blocking, or disconnection, the generative AI service provider should also adopt measures such as setting keyword filters or adding content labels to prevent the recurrence of similar infringements.
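
To make this combination of measures concrete, the sketch below models a hypothetical provider-side notice handler. Every name in it (Notice, ProviderState, handle_notice, may_output) is invented for illustration; it is a minimal sketch of the deletion, filtering, and labeling response described above, not an implementation of any real system or of what Article 1195 itself requires.

```python
from dataclasses import dataclass, field

# Hedged illustration of the notice-handling steps described above.
# All class, field, and function names are hypothetical inventions.

@dataclass
class Notice:
    claimant: str
    offending_text: str   # content the right holder identifies as infringing
    keywords: list[str]   # terms the provider may filter going forward

@dataclass
class ProviderState:
    generated_log: list[str] = field(default_factory=list)
    keyword_filters: set[str] = field(default_factory=set)
    labels: dict[str, str] = field(default_factory=dict)

def handle_notice(state: ProviderState, notice: Notice) -> None:
    """On receiving a notice: delete the identified content, add keyword
    filters against recurrence, and record a provenance label."""
    # 1. Delete the identified output from what the system can serve again.
    state.generated_log = [t for t in state.generated_log
                           if notice.offending_text not in t]
    # 2. Block recurrence of similar outputs via keyword filtering.
    state.keyword_filters.update(k.lower() for k in notice.keywords)
    # 3. Record provenance so related content can be prominently labeled.
    state.labels[notice.offending_text] = (
        f"removed after notice from {notice.claimant}")

def may_output(state: ProviderState, candidate: str) -> bool:
    """Subsequent generations are screened against the accumulated filters."""
    low = candidate.lower()
    return not any(k in low for k in state.keyword_filters)
```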

In summary, the Notice-to-Delete Rule should not be directly applied to generative AI infringement but rather analogically applied as a safe harbor mechanism. Under this mechanism, once a generative AI service provider receives an infringement notice, it should employ measures such as keyword filtering to block the output of related content. This approach enables generative AI service providers to prevent infringement risks at relatively low cost, aligning with the principle that liability should be borne by the party with the lowest cost of prevention—the cheapest cost avoider theory.

4. Conclusion

Risk is one of the core concepts in modern tort law. Scholars have pointed out that risk permeates all systems of tort liability. In the case of fault liability, risk serves as an important reference factor for determining fault; in the case of strict liability, risk constitutes the fundamental basis for the allocation of responsibility. Moreover, risk also influences matters such as the assumption and exemption of liability.

We have already entered the era of artificial intelligence, and the legal risks brought by AI may lead to infringements upon the rights and interests of victims, such as personality rights and intellectual property rights. These are inevitable risks accompanying the development of artificial intelligence. However, the risks posed by generative artificial intelligence are limited and entirely controllable. To encourage the development of the AI industry, it would be inappropriate to impose strict liability that requires service providers to bear all risks associated with artificial intelligence. On the contrary, the determination of liability for service providers should adopt a prudent and inclusive approach. The principle of fault liability, as the fundamental rule for balancing the protection of rights and the preservation of freedom of conduct, can impose necessary limits on the liability of service providers.

Fault should be determined based on an objective standard—namely, the breach of the duty of care. The Civil Code’s chapter on tort liability already establishes a relatively complete system of tort liability, particularly through the existing rules governing online infringement (including the Notice-to-Delete Rule and the knowledge rule), which can provide reference and guidance for determining the fault of generative AI service providers. The Notice-to-Delete Rule applicable to network service providers can be analogically applied to cases of infringement by generative AI service providers. The safe harbor rule can offer necessary protection for such providers, appropriately balancing the protection of victims’ rights with the encouragement of service provider innovation, thereby fostering the healthy development of the artificial intelligence industry.