Legislative Positioning of Artificial Intelligence Users and Its Three-Dimensional Regulation
Author: Zheng Zhifeng, Academy of Science and Technology Law, Southwest University of Political Science and Law
Abstract: Whatever subject structure the “Artificial Intelligence Law” ultimately adopts, users will occupy an important position within it. The core of defining users lies in assessing usage behavior, and users may also shift identities to become developers or providers. Establishing a dedicated chapter on “Protection of the Rights and Interests of Artificial Intelligence Users” accords with the legislative goals of the Artificial Intelligence Law and conveys to the world China’s consistent people-oriented philosophy of artificial intelligence governance. The rights and interests of artificial intelligence users include the right to information, the right to equality, the right to control, and the right to data. Users also play an important role in preventing and managing artificial intelligence risks, so the “Artificial Intelligence Law” should systematically and clearly define users’ obligations of reasonable use, risk management, supervision, and information provision, thereby implementing the requirements of artificial intelligence ethical guidelines. Providing for “the liability of artificial intelligence users” helps improve the governance structure of artificial intelligence and ensures the realization of trustworthy artificial intelligence. Administrative fines should take comprehensive account of the nature, type, and scale of users, while users’ civil tort liability should be defined by distinguishing between artificial intelligence product liability and application liability, between artificial intelligence systems of different risk levels, and between assistive and substitute artificial intelligence.
Key Words: Artificial Intelligence Law; Users; Protection of Rights and Interests; Obligation Content; Assumption of Liability
In recent years, the development of artificial intelligence has been exceptionally rapid, and the governance of artificial intelligence has entered a new phase of large-scale centralized legislation. Internationally, the European Union officially adopted the world’s first “Artificial Intelligence Law” in 2024, leading a new wave of artificial intelligence governance. In October 2023, the President of the United States signed the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (hereinafter referred to as the “AI Executive Order”), comprehensively establishing the policy and legal framework for U.S. artificial intelligence governance. Domestically, in July 2023, the “2023 Legislative Work Plan of the State Council” proposed to “prepare to submit the draft of the Artificial Intelligence Law to the Standing Committee of the National People’s Congress for review,” a point reaffirmed in the “2024 Legislative Work Plan of the State Council.” Subsequently, the academic community put forward two suggested drafts in succession: one led by the Chinese Academy of Social Sciences, titled the “Artificial Intelligence Model Law (Expert Suggested Draft),” and the other jointly drafted by China University of Political Science and Law and other institutions, titled the “Artificial Intelligence Law (Scholar Suggested Draft).” China has thus arrived at a valuable opportunity to formulate its own “Artificial Intelligence Law.”
The formulation of an “Artificial Intelligence Law” first requires clarity on the regulatory targets, that is, identifying which subjects and activities related to artificial intelligence should be regulated. The types of entities involved in the artificial intelligence industry chain are highly diverse, including developers engaged in AI technology research and development, manufacturers producing AI products, companies offering AI products or services to the market, suppliers providing components or plugins to AI systems, individual users utilizing AI products or services, and third parties that build derivative applications on AI systems, for example through APIs. Determining which of these require regulation under the “Artificial Intelligence Law” warrants thorough consideration. In this regard, whether it is the EU’s “Artificial Intelligence Law,” the U.S. “AI Executive Order,” or China’s two suggested drafts of the “Artificial Intelligence Law,” they all explicitly identify users and user activities as key regulatory focuses. Based on this, this article delves into the subject of artificial intelligence users, discussing their rights, obligations, and responsibilities, aiming to contribute insights to the formulation of China’s “Artificial Intelligence Law.”
1. Legislative Positioning of Artificial Intelligence Users
Since the discussion of user regulation is set within the context of artificial intelligence legislation, the first step is to examine the subject structure of the future “Artificial Intelligence Law” and clarify the legislative positioning of users.
1.1 Basic Attributes of Artificial Intelligence Users
1.1.1 Subject Structure of the “Artificial Intelligence Law”
Regarding the subject structure of the “Artificial Intelligence Law,” domestic and international practices differ. The first approach adopts a binary model of providers and deployers. For instance, the European Union’s “Artificial Intelligence Law” adopts a subject structure dividing entities into providers and deployers. Providers are defined as those who develop artificial intelligence systems or general-purpose artificial intelligence models, or who have such systems or models developed, and place them on the market or put them into service under their own name or trademark. Deployers are defined as those who use artificial intelligence systems under their own authority.
The second approach adopts a ternary model of developers, deployers, and consumers. In May 2024, Colorado enacted the United States’ first comprehensive state legislation regulating artificial intelligence, titled the “Consumer Protection in Interactions with Artificial Intelligence Systems Act” (hereinafter referred to as the “AI Consumer Protection Act”). This Act adopts a ternary subject structure of developers, deployers, and consumers. Developers refer to individuals or entities conducting business in the state and engaging in the development or substantial modification of artificial intelligence systems. Deployers refer to those conducting business in the state and deploying high-risk artificial intelligence systems. Consumers are individual residents of the state.
The third approach adopts a ternary model of developers, providers, and users. In April 2024, Japan’s Ministry of Internal Affairs and Communications (MIC) and Ministry of Economy, Trade, and Industry (METI) released the “Artificial Intelligence Operators Guide Version 1.0,” adopting a ternary structure of developers, providers, and users. Both suggested drafts of China’s “Artificial Intelligence Law” follow a similar approach to Japan.
In comparison, regardless of the model adopted, artificial intelligence users remain a central regulatory focus of the “Artificial Intelligence Law.” It is noteworthy that different legislative frameworks use slightly varied terminologies. For instance, the EU’s “Artificial Intelligence Law” employs the term “deployers,” the U.S. “AI Consumer Protection Act” mentions deployers and consumers, while China’s two draft suggestions and Japan’s “Artificial Intelligence Operators Guide Version 1.0” use the term “users.”
First, the “deployers” in the EU’s “Artificial Intelligence Law” can be considered equivalent to the “users” in China’s draft suggestions. This is because the EU’s initial draft of the “Artificial Intelligence Law” in April 2021 used the term “user.” By December 2023, the term was replaced with “deployers,” but the definition remained essentially unchanged, still referring to natural or legal persons who use artificial intelligence systems under their own authority.
Second, the deployers and consumers in the U.S. “AI Consumer Protection Act” require specific analysis based on the Act’s content. Deployers are subject to two restrictive conditions: conducting business in Colorado and deploying high-risk systems. Compared to the deployers in the EU’s “Artificial Intelligence Law,” this category is narrower but can still be understood as users, since the Act defines deployment as the “use” of high-risk artificial intelligence systems, with its core concept remaining usage behavior. As for consumers, this is a specific term for individuals who purchase or use goods or receive services, and it is not well suited as a standard category of regulated subject under the “Artificial Intelligence Law.”
1.1.2 Fundamental Characteristics of Artificial Intelligence Users
The necessity of regulating users under the “Artificial Intelligence Law” lies in their distinct characteristics.
First, group attributes. The development of artificial intelligence technology aims to benefit the public and empower the comprehensive development of the social economy. This goal cannot be achieved without users and usage behavior, as only through the usage stage can artificial intelligence technology truly transform into a driving force for societal progress. The preamble of the U.S. “AI Executive Order” states, “Artificial intelligence holds extraordinary potential, full of both promise and peril. Only through the responsible use of artificial intelligence can we address pressing challenges and make our world more prosperous, productive, innovative, and secure.” Unlike developers and providers, who are situated at the forefront of the artificial intelligence industry chain, users are at its endpoint, representing the largest group in terms of scale and quantity, and they are best positioned to assess the extent and quality of artificial intelligence applications. Given this massive user base, the benefits and risks of artificial intelligence technology are magnified, necessitating adequate attention under the “Artificial Intelligence Law.”
Second, interest attributes. Compared to developers and providers, the interests of users are more complex, as they may simultaneously be beneficiaries and victims of artificial intelligence technology. On one hand, users are the direct consumers of artificial intelligence products and services, enjoying the benefits of applications that make work, learning, and life safer, more convenient, and intelligent. According to the principle of “consistency between benefits and risks,” users should bear obligations and responsibilities for the external risks of artificial intelligence, such as purchasing insurance, compensating victims, and conducting regular maintenance, to achieve a balance of legal interests. On the other hand, users are often the group closest to artificial intelligence technology, situated on the front lines of its application. As such, they frequently endure the challenges of immature technologies and unknown risks, necessitating legal relief. Although the scenarios and subjects of these two interest distributions differ, users remain central to the distribution of interests, requiring explicit definition under the “Artificial Intelligence Law.”
Third, risk attributes. The risks posed by artificial intelligence in recent years have garnered widespread attention. One of the primary tasks of the “Artificial Intelligence Law” is to manage these risks and achieve lifecycle governance of artificial intelligence. For example, the EU’s “Artificial Intelligence Law” follows a consistent risk-mitigation logic of first identifying risks and then reducing them, and this risk-reduction orientation runs through its regulatory systems, scope, mechanisms, and concepts. Users, as both risk creators and risk controllers, are indispensable entities in artificial intelligence risk governance. On one hand, users can prevent known risks of artificial intelligence through responsible use and serve as discoverers of unknown risks, converting them into known risks or even confirmed safety, thus allowing artificial intelligence technology to fulfill its positive potential. On the other hand, irresponsible or even abusive use of artificial intelligence by users is likely to accelerate and amplify its risks, endangering public safety and even national security. Whatever the type of risk, and regardless of how safe artificial intelligence products or services themselves are, users always have the potential to influence those risks, necessitating legal regulation.
1.2 Legal Definition of Artificial Intelligence Users
1.2.1 Usage Behavior
Artificial intelligence users, as the term implies, refer to individuals who use AI products or services. Therefore, defining “usage behavior” is critically important.
First, usage behavior typically refers to activities conducted in accordance with the usage instructions and functional purposes of AI products or services. For example, commuting to work using autonomous vehicles, generating images, text, or videos with generative AI, or monitoring workplaces using facial recognition systems. Of course, usage behavior can be either lawful or unlawful. However, if users do not utilize AI according to its intended purpose but instead treat it as an ordinary tool, they are not classified as AI users. This scenario generally falls into two categories: the first involves users not enabling AI mode, with the product or service remaining in a non-AI state. For example, a driver uses an autonomous vehicle as a conventional car without activating its autonomous driving functionality. The second involves usage that entirely contradicts the intended purpose of AI, rendering the AI incapable of functioning properly, such as using an autonomous vehicle as a camping shelter or a humanoid robot as an exhibition toy. This distinction arises because the objective of the “Artificial Intelligence Law” is to prevent and manage intrinsic risks of AI, which originate from the performance of AI products or services themselves.
Second, usage behavior emphasizes control over AI products or services rather than passive use. As a key subject regulated under the “Artificial Intelligence Law,” users’ behavior is not limited to passive interaction with AI products or services but includes active control over their usage methods and purposes. For instance, in the case of autonomous buses providing public transportation services, the user is the driver capable of controlling the autonomous driving features, not the passengers merely riding the bus. Similarly, for facial recognition systems installed in workplaces, the employer should be regarded as the user because they control the specific applications of the system, whereas employees clocking in are not considered users. The EU’s 2020 “Artificial Intelligence Liability Directive” explicitly defines users (deployers) as those who decide to use AI systems and control the associated risks and operational benefits. Control, in this context, implies the ability to influence operational methods or alter the functionality of AI systems continuously. Domestically, the “Artificial Intelligence Model Law (Expert Suggested Draft)” also recognizes this distinction by introducing the concept of “affected individuals or organizations,” effectively differentiating them from users.
Third, usage behavior must fall within the regulatory scope of the “Artificial Intelligence Law.” AI applications are diverse, and corresponding usage behaviors are equally varied; not all fall within the law’s jurisdiction. For example, the EU’s “Artificial Intelligence Law” specifies several exceptions: usage of AI systems outside the EU; activities utilizing AI systems specifically for military, defense, or security purposes; personal, non-professional usage of AI systems; and the use of free and open-source AI systems, unless such systems constitute high-risk AI. Similarly, China’s two draft suggestions for the “Artificial Intelligence Law” include corresponding provisions. For example, the “Artificial Intelligence Law (Scholar Suggested Draft)” excludes three categories of usage behaviors: first, activities by natural persons using AI for personal or household purposes; second, the use of free and open-source AI; and third, military applications of AI. This exclusion is reasonable given that military use primarily concerns national security and significantly differs in impact from general public usage. However, the personal or household use of AI and the use of free and open-source AI entail necessary considerations for risk management and benefit allocation, suggesting they should not be categorically excluded.
1.2.2 Subject Types
Users encompass a variety of types, ranging from individuals to organizations. However, the most typical subject type is the individual, specifically the end user. These users are often consumers purchasing AI products or services. It is crucial to align the definition of users in the “Artificial Intelligence Law” with subjects under other legal frameworks. For instance, the “Regulations on Intelligent Networked Vehicles in the Shenzhen Special Economic Zone” (hereinafter referred to as the “Shenzhen Regulations”) uses terms like drivers, owners, managers, and passengers, requiring discernment to identify users. According to the “Shenzhen Regulations,” conditionally autonomous vehicles and highly autonomous vehicles require drivers. Drivers qualify as users because they directly decide whether and how to activate autonomous driving mode and take over in emergencies. For fully autonomous vehicles, the “Shenzhen Regulations” stipulate that no drivers are needed in the vehicle, and the vehicle can operate without onboard personnel, but owners and managers must control the autonomous vehicles remotely or otherwise, thus classifying them as users. Passengers, lacking control over the autonomous vehicle, are not considered users but rather “affected individuals.”
Beyond individual users, organizations can also be AI users. The EU’s “Artificial Intelligence Law” lists legal entities, public authorities, institutions, and other groups. Similarly, China’s two draft suggestions for the “Artificial Intelligence Law” reference organizations in various user contexts. Broadly, organizations as users fall into two categories: first, private law organizations, typically enterprises. For example, the EU’s “Artificial Intelligence Law” distinguishes between general enterprises and small and micro-enterprises (including startups), applying differentiated regulatory measures. Likewise, China’s draft suggestions address enterprises as users. Second, public law organizations, typically government agencies. The EU’s “Artificial Intelligence Law” specifically mentions various government agencies, including law enforcement, judicial departments, and immigration, asylum, and border management agencies. The “Artificial Intelligence Model Law (Expert Suggested Draft)” uses terms like “state organs,” “government agencies,” “public institutions,” and “other organizations legally vested with public administration functions,” establishing detailed supporting rules.
1.2.3 Identity Transformation
The identity of users can be singular or composite, depending on the circumstances. First, users may simultaneously serve as developers and providers. If an enterprise develops and provides AI products or services while also using the products or services it creates, the enterprise assumes the identities of developer, provider, and user, all subject to the “Artificial Intelligence Law.” For instance, an enterprise developing autonomous driving technology, manufacturing autonomous vehicles, and offering autonomous taxi services has combined identities. Second, users may transform into new developers or providers. The EU’s “Artificial Intelligence Law” stipulates that deployers (users) are considered new providers if they meet any of the following conditions: branding or trademarking high-risk AI systems already placed on the market or provided as services; substantially modifying such systems in ways that exceed the scope of the initial conformity assessment; or altering an AI system’s intended purpose (including general AI systems) to classify it as high-risk. Once users become new providers, original providers are relieved of their provider obligations but must still offer necessary information, technical access, and other assistance to enable the new providers to fulfill their duties.
2. Protection of the Rights and Interests of Artificial Intelligence Users
Whether the “Artificial Intelligence Law” should specifically address the protection of AI users’ rights and interests is a significant issue. On the formal level, it concerns the chapter structure and framework arrangement of the “Artificial Intelligence Law”; on the substantive level, it pertains to the value orientation and governance logic of the law, requiring careful consideration.
2.1 Necessity of Stipulating “Protection of the Rights and Interests of AI Users”
Internationally, the EU’s “Artificial Intelligence Law” and the U.S. “AI Consumer Protection Act” do not include specific provisions on protecting AI users’ rights and interests. Domestically, the two proposed drafts of China’s “Artificial Intelligence Law” take differing approaches: the “Artificial Intelligence Model Law (Expert Suggested Draft)” makes no such provisions, while the “Artificial Intelligence Law (Scholar Suggested Draft)” creatively establishes a dedicated chapter on “Protection of Users’ Rights and Interests.” The author believes that the future “Artificial Intelligence Law” should incorporate specific provisions addressing “Protection of the Rights and Interests of AI Users.”
First, it improves the governance structure of the “Artificial Intelligence Law.” The governance structure adopted by the “Artificial Intelligence Law” is critical and must follow a rigorous logical framework to ensure internal consistency. Formally, the future “Artificial Intelligence Law” should employ the mature “general-to-specific” legislative technique: general provisions address legislative objectives, scope of application, and fundamental principles, while specific provisions set out detailed regulatory requirements. Substantively, constructing the rules of the “Artificial Intelligence Law” necessitates norms encompassing “powers,” “rights,” “obligations,” and “responsibilities.” The EU’s “Artificial Intelligence Law” exemplifies a typical “power-obligation-responsibility” model: power norms correspond to the regulatory matters of AI supervisory bodies, while obligation and responsibility norms pertain to the duties and legal liabilities of AI providers and deployers (users). By comparison, China’s “Artificial Intelligence Law” should incorporate rights norms. On one hand, rights norms provide a foundation for obligation and responsibility norms: protecting user rights is the purpose behind developers’ and providers’ obligations, the violation of which gives rise to legal liability, creating a more coherent logical chain. On the other hand, introducing rights norms can institutionally constrain power norms, providing a legal basis and behavioral boundaries for the exercise of regulatory authority.
Second, it addresses practical challenges in AI development. At present, China’s AI industry is on an overall upward trajectory, and AI applications have achieved notable quantitative and qualitative progress under the “AI+” initiative. At the same time, AI applications have unsettled the existing legal order and significantly affected industrial development. This can be examined from two dimensions: first, the disruption of the existing legal order, manifested most prominently in the infringement of established rights by AI applications; second, the empowerment and opportunities for renewal that accompany such disruption, which give rise to new demands for rights. Rights represent a core category in jurisprudence and serve as a crucial response to technological transformation, forming the core institutional framework of AI law and technology law. For example, the advent of the steam engine and assembly-line production led to the establishment of workers’ rights; copyright emerged to address piracy issues arising from printing technology; and the widespread application of information technologies prompted the recognition of privacy and personal information rights. Similarly, AI technology has generated new demands for rights. A dedicated chapter on “Protection of the Rights and Interests of AI Users” in the “Artificial Intelligence Law” effectively responds to the needs of practical development.
Third, it contributes Chinese wisdom to AI legislation. Currently, the global AI boom is prompting nations to compete for leadership in AI governance to establish international standards and discourse power. The United Nations has established a high-level expert advisory body, the EU has issued the world’s first “Artificial Intelligence Law,” and the U.S. has successively introduced multiple AI-related legal policies. Meanwhile, China is actively participating in international AI governance by signing the “Bletchley Declaration” and launching the “Global Initiative on AI Governance.” Clearly, the sustainable development of new-generation AI technology is not only a competition at the technological level but also a contest of legal soft power. Against this backdrop, China’s formulation of the “Artificial Intelligence Law” is highly anticipated—not only as a step toward legalizing China’s AI governance but also as a valuable opportunity to present China’s approach to AI governance to the world. By creatively establishing a dedicated chapter on “Protection of the Rights and Interests of AI Users,” China’s “Artificial Intelligence Law” can significantly distinguish itself from the governance logic of the EU’s “Artificial Intelligence Law,” powerfully demonstrate China’s consistent people-centered AI governance philosophy, and position the law as a declaration of rights protection in the digital age, contributing Chinese wisdom to global AI legislation.
2.2 Basic Framework for Protecting the Rights and Interests of AI Users
Since it is deemed necessary for the “Artificial Intelligence Law” to include specific provisions for protecting AI users’ rights and interests, the next step is to determine which rights should be stipulated. The author argues that the “Artificial Intelligence Law” should explicitly establish at least four types of user rights.
2.2.1 Right to Information
Artificial intelligence, driven by technologies such as neural networks, machine learning, and large models, possesses attributes of high intelligence but also exhibits complexity, uncertainty, and opacity. Its internal decision-making processes often resemble a “black box.” To address this, the “Artificial Intelligence Law” must first grant users the right to information to bridge the information asymmetry inherent in AI technologies. It is important to note that this right pertains not only to individual users but also to organizational users.
First, users should have access to basic information about AI products or services, including: the names, contact details, and means of obtaining information about developers and providers; the purposes, intended uses, main operating mechanisms, and potential risks of AI products or services; and the rights and remedies available to users.
Second, users should be informed about the role of AI to avoid confusion between humans and machines or cognitive dissonance. Scholars have referred to this as the “laws of identification,” requiring entities to disclose whether they possess AI capabilities. The EU’s “Artificial Intelligence Law” mandates that AI systems designed for direct interaction with humans include features that make it clear to individuals that they are interacting with an AI system rather than a human.
Third, the right to explanation is an integral aspect of the right to information. When decisions made by AI significantly impact users, they should have the right to request further explanations from providers. This enhances public trust and acceptance of AI. The U.S. White House Office of Science and Technology Policy explicitly stated in its October 2022 “Blueprint for an AI Bill of Rights”: “You should know when and how an automated system impacts you and have an explanation that is technically valid, meaningful to you, and understandable by those using and overseeing the system.”
2.2.2 Right to Control
In traditional technological narratives, the relationship between humans and machines has been one-directional, with humans controlling machines to understand and transform the world. However, AI’s autonomous decision-making capabilities differentiate it from previous machines, enabling actions without human intervention. While this autonomy liberates humans, it also generates anxieties about losing control, as machines increasingly encroach upon human decision-making authority.
To address this, laws must safeguard human control over machines. The “Ethical Norms for a New Generation of Artificial Intelligence” explicitly states: “Ensure humans have full autonomy in decision-making, the right to choose whether to accept AI services, the right to disengage from AI interactions at any time, and the right to terminate AI system operations, ensuring that AI remains under human control.” Similarly, the EU’s “Artificial Intelligence Law” requires safeguards for “effective human oversight during use.”
For AI products or services, individual and organizational users alike should enjoy the right to control, including the ability to demand termination of an AI system’s operation, except where termination would itself create safety risks. Moreover, the right to control encompasses the option to refuse the use of AI products or services and to request that providers offer decisions involving human participation.
2.2.3 Right to Equality
AI applications have exacerbated severe discrimination issues, posing threats to the principles of fairness and justice. These concerns can no longer be ignored. The U.S. “Blueprint for an AI Bill of Rights” states: “There is substantial evidence that automated systems can yield unfair outcomes and amplify existing inequities. For example, facial recognition technologies may result in incorrect and discriminatory arrests; hiring algorithms may support biased decisions; healthcare algorithms may underestimate the severity of diseases in Black Americans. Discriminatory practices embedded in AI and other automated systems exist across various industries, sectors, and contexts.”
In response, the “Artificial Intelligence Law” must reaffirm and update the right to equality to address AI-induced discrimination. This right should cover individual users and vulnerable digital groups, extending beyond formal equality to substantive equality. Accordingly, the law should address biases and discrimination arising from poor data quality and flawed algorithm models related to gender, religion, age, ethnicity, or economic status. Moreover, AI products and services should consider the needs of special groups such as minors, the elderly, and persons with disabilities, achieving substantive and collective equality and bridging the digital divide.
2.2.4 Right to Data
AI development relies heavily on data, which serves as its lifeblood. Only vast amounts of data can enable AI to become increasingly intelligent and provide more targeted and personalized services. For instance, the “Global Initiative on AI Governance” emphasizes “ensuring the protection of personal privacy and data security in AI development and application, opposing the theft, alteration, leakage, and other illegal collection and use of personal information.” Article 14 of Taiwan’s “Artificial Intelligence Basic Law (Draft)” stipulates: “The personal data protection authority shall assist relevant authorities in preventing unnecessary collection, processing, or use of personal data during AI development and application and promote measures or mechanisms for integrating personal data protection into default and design to safeguard the rights of data subjects.”
Although China’s “Personal Information Protection Law” already provides comprehensive provisions on personal information protection, it remains necessary for the “Artificial Intelligence Law” to reaffirm personal information rights. This ensures that individual users are informed and able to decide on the handling of their personal information when using AI products or services. Furthermore, considering that data has become a crucial production factor in the digital era, organizational users’ data property rights should also be protected.
3. Obligatory Norms for Artificial Intelligence Users
Given the significant role users play in AI risk governance, clarifying their obligations is crucial. In setting these obligations, the “Artificial Intelligence Law” must carefully consider users’ roles and the impacts of their usage behavior.
3.1 Rationality of Establishing “Obligations of AI Users”
In practice, there are varying approaches to whether AI users’ obligations should be systematically specified. The EU’s “Artificial Intelligence Law” not only delineates providers’ obligations but also explicitly stipulates the obligations borne by deployers (users). The U.S. “AI Consumer Protection Act” similarly outlines obligations for developers and deployers (users). In contrast, China’s two proposed drafts for the “Artificial Intelligence Law” direct obligations toward developers and providers without explicitly addressing users’ obligations. The author argues that systematically specifying users’ obligations is necessary.
First, it helps uphold AI ethical principles. While AI has the potential to enhance human well-being, this relies on adherence to ethical principles of people-centered, beneficial development. Developers, providers, and users alike should be subject to the obligation of practicing AI ethics. The United Nations’ 2024 resolution on artificial intelligence, titled “Seizing the Opportunities of Safe, Secure, and Trustworthy AI Systems for Sustainable Development,” states: “Improper or malicious design, development, deployment, and use of AI systems—such as without appropriate safeguards or in violation of international law—pose risks.” Similarly, China’s “Ethical Norms for a New Generation of Artificial Intelligence” emphasizes the integration of ethical values throughout the AI lifecycle and includes a dedicated chapter on “norms for use.” To implement these principles, the “Artificial Intelligence Law” must not only embed ethical values in upstream design and development but also regulate the use of AI technology in a more responsive manner. For example, the “Artificial Intelligence Law (Scholar Suggested Draft)” general provisions state that users “should lawfully prevent and control potential ethical risks of artificial intelligence,” while the “Artificial Intelligence Model Law (Expert Suggested Draft)” requires that users “take effective measures to avoid unreasonable discriminatory treatment toward individuals or organizations.” These ethical principles need to be operationalized through users’ obligations.
Second, it aids in the prevention and management of AI risks. Risk is the greatest variable in an intelligent society. Both the EU’s “Artificial Intelligence Law” and the U.S. Department of Commerce’s “AI Risk Management Framework” emphasize the importance of governing AI risks. While developers and providers primarily manage risks from the front end of AI products and services, users predominantly affect the back end, the “last mile” where AI risks materialize in reality. Compared to developers and providers, users have direct control over AI products and services, such as deciding when to enable autonomous driving, what keywords to feed into generative AI, or how to employ deep synthesis technologies, directly influencing the materialization of AI risks. The EU’s “Artificial Intelligence Law” notes: “While risks associated with AI systems may arise from the way they are designed, they may also result from the way such systems are used. Deployers are most knowledgeable about the specific use of high-risk AI systems, enabling them to identify potentially significant risks unforeseen during the development stage. Deployers also have better knowledge of the usage environment and affected populations or groups, including vulnerable groups.” Considering the critical role users play in preventing and managing AI risks, it is reasonable for the “Artificial Intelligence Law” to systematically specify users’ obligations.
Third, it facilitates the coordination of AI law norms. Clarifying users’ obligations is essential for aligning different normative frameworks under the “Artificial Intelligence Law.” On one hand, the obligations of users are logically linked to those of developers and providers, as fulfilling developers’ and providers’ obligations often requires users’ cooperation. For instance, the EU’s “Artificial Intelligence Law” states: “Deployers play a critical role in ensuring the protection of fundamental rights, complementing the obligations undertaken by providers during the development of AI systems.” Building on this, the legislation places the obligations of high-risk AI system providers and deployers (users) in the same section, requiring providers to establish quality management systems and retain automatically generated logs while obliging deployers (users) to take appropriate technical and organizational measures and maintain such logs.
On the other hand, there is a link between users’ obligation norms and responsibility norms, as the fulfillment of users’ obligations directly impacts the application of their responsibilities. The absence of clearly defined users’ obligations would undermine the enforceability of users’ responsibility norms.
3.2 Detailed Elaboration on the Obligations of Artificial Intelligence Users
The diversity of AI users, such as individuals utilizing generative AI services or organizations deploying technologies like facial recognition systems, results in varying impacts on others and necessitates differentiated obligations. The “Artificial Intelligence Law” should address these differences comprehensively.
Users are required to engage in reasonable use of AI products and services, which is fundamental to ensuring their intended functionality and preventing associated risks. Proper use involves adherence to the intended purposes outlined by the developers or providers. China’s “Ethical Norms for a New Generation of Artificial Intelligence” emphasize the importance of good-faith use, discouraging misuse and explicitly prohibiting malicious or improper applications. For instance, users should familiarize themselves with the basic operational information, instructions, and potential risks of AI systems to use them responsibly. This principle is reflected in the “Shenzhen Regulations,” which mandate that drivers of intelligent connected vehicles operate autonomous features according to user manuals and traffic rules. Additionally, certain AI products or services may require users to possess specific qualifications to operate them safely and effectively. Misuse of AI is equally critical to address, as the self-learning capabilities of AI can be exploited for harmful purposes, as evidenced by the infamous manipulation of Microsoft’s chatbot Tay. Users bear the responsibility to prevent such abuses and refrain from unauthorized modifications or unethical applications, such as using generative AI for illegal activities or engaging in acts of violence against humanoid robots.
Risk management is another crucial obligation for AI users, as their actions directly influence the potential risks associated with AI systems. Users must adopt appropriate technical and organizational measures to mitigate risks, particularly when engaging with high-risk AI systems like those used for judicial sentencing or large-scale surveillance. Individual users of low-risk AI are generally not subject to such stringent measures due to the impracticality and cost of implementation. Ensuring the quality of input data is vital, as accurate and representative data reduce the likelihood of errors. For example, healthcare professionals using AI for diagnostic purposes must ensure that patient records and imaging data are comprehensive and accurate to avoid erroneous outcomes. Furthermore, the maintenance and timely updating of AI systems are essential to address security vulnerabilities and ensure operational effectiveness. Preserving system-generated logs is equally important for tracing and investigating incidents involving AI technologies.
The autonomous nature of AI does not negate the necessity of human oversight. Users are responsible for supervising AI systems to ensure safe and ethical operation. For instance, conditional or highly autonomous vehicles require drivers to remain alert, monitor their surroundings, and intervene during emergencies, as stipulated in the “Shenzhen Regulations.” Similarly, in contexts like healthcare, where AI systems provide diagnostic support, users must critically evaluate AI-generated recommendations to maintain ethical and professional standards. The World Health Organization underscores the need for meaningful human oversight in healthcare, emphasizing that decisions should not solely rely on AI outputs.
When AI use affects third parties, users are obliged to provide transparent information. Employers deploying facial recognition systems, for instance, must inform employees about the existence of such systems, the scope of data collection, and any potential impacts. The U.S. “AI Consumer Protection Act” requires users to disclose information about the type and risks of high-risk AI systems, including details about data collection and usage practices. This obligation complements the responsibilities of developers and providers, reinforcing a comprehensive framework for protecting the rights and interests of all affected parties.
4. Liability of Artificial Intelligence Users
Legal liability is a critical component of the “Artificial Intelligence Law,” as it pertains to both remedies for victims and the freedoms of actors. Whether to stipulate liability for AI users and how to define it are matters requiring careful consideration.
4.1 Justification for Clarifying “AI User Liability”
In international practice, the EU’s “Artificial Intelligence Law” includes provisions for the administrative liability of both providers and deployers (users) but omits civil liability. Domestically, the “Artificial Intelligence Model Law (Expert Suggested Draft)” excludes users from its liability chapter, focusing solely on developers’ and providers’ legal responsibilities, whereas the “Artificial Intelligence Law (Scholar Suggested Draft)” explicitly addresses administrative and civil liability for developers, providers, and users. It is argued here that the “Artificial Intelligence Law” should define both administrative and civil liability for users.
Clarifying user liability facilitates the coherent application of laws. As key subjects under the “Artificial Intelligence Law,” users must face liability norms alongside their obligations. For example, if a user of a conditionally autonomous vehicle violates the takeover obligation, causing injury or death to other road users, they should naturally bear civil liability for their actions. Similarly, if a company deploying facial recognition systems in public spaces fails to implement reasonable organizational and technical measures to safeguard data, resulting in massive data leaks, or actively abuses such systems to intrude on the privacy of unspecified individuals, administrative liability will inevitably follow. From the perspective of regulatory coherence, clearly defined user liability norms activate and reinforce user obligations. Additionally, risk regulation theory emphasizes the accountability relationships between regulatory organizations and external entities. Since users are obligated to prevent and manage risks, accountability mechanisms must also be in place.
Defining user liability also promotes AI industry development. According to an EU survey, liability issues rank among the top three obstacles to deploying AI in enterprises, posing significant challenges to business operations. In response, the EU established the “Expert Group on Liability and New Technologies” in 2018 to study AI liability legislation. In 2022, the EU adopted proposals for the “Artificial Intelligence Liability Directive” and the revised “Product Liability Directive,” addressing AI-related civil liability comprehensively. Similarly, China has shown significant interest in AI liability issues, ranging from traffic accident liabilities involving autonomous vehicles to medical malpractice liabilities involving diagnostic AI and the legal responsibilities of generative AI. Users, as the end-users of AI products and services and the largest and most influential group, are at the center of these concerns. If users misuse or abuse AI technologies, public trust in AI will erode, necessitating accountability to restore that trust.
Clarifying user liability aligns with the nature of AI law. While debates persist regarding whether to specify all forms of liability for AI users, it is argued here that such provisions are consistent with the interdisciplinary nature of AI law. AI jurisprudence is inherently a field-specific legal discipline, bridging various legal domains to explore the interaction between AI and the law. This interdisciplinary characteristic of AI law necessitates transcending traditional legal silos and integrating public and private law in its regulatory framework. Moreover, administrative and civil liabilities are vital components of AI risk governance, serving as essential tools for preventing and managing AI risks. Including both types of liability in the “Artificial Intelligence Law” aligns with the law’s objectives of guiding AI development along the correct path. While the EU’s “Artificial Intelligence Law” does not address civil liability, its revised “Product Liability Directive” and “Artificial Intelligence Liability Directive” rely on concepts such as AI systems, high-risk AI, providers, and deployers, which are grounded in the definitions set forth by the EU’s “Artificial Intelligence Law.” In essence, these provisions function as two sides of the same coin.
4.2 Constructing the Pathways for Legal Liability of AI Users
Public law liability encompasses administrative and criminal liabilities. In this context, the “Artificial Intelligence Law” should prioritize clarifying administrative liabilities for users, while criminal liability can be addressed through cross-referencing provisions in the “Criminal Law of the People’s Republic of China.”
The methods of administrative liability for AI users are diverse, including corrective orders, warnings, confiscation of unlawful gains, suspension or termination of relevant business activities, and fines. Among these, fines have a significant impact on users and require precise and differentiated design. For private entities, higher fines may be appropriate, with the maximum set at either a fixed amount or a percentage of the preceding year’s turnover, whichever is higher. Public entities such as government agencies, however, are not suitable subjects for fines, as such penalties would effectively shift the financial burden to taxpayers, undermining their intended purpose. Additionally, fines for private entities should distinguish between general businesses and small or micro enterprises (including startups). The EU’s “Artificial Intelligence Law” adopts a lenient approach for small and micro enterprises, capping fines at the lower of a fixed amount or a percentage of global annual turnover. Both drafts of China’s “Artificial Intelligence Law” also emphasize factoring in the severity of violations, the user’s intent, and remedial measures when setting fines.
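To make the “whichever is higher” and “whichever is lower” mechanisms concrete, the fine cap can be expressed schematically. The following is an illustrative sketch only: the symbols and the numbers in the example are assumptions chosen for exposition, not the statutory figures of the EU’s “Artificial Intelligence Law” or of any Chinese draft.

\[
F_{\text{cap}} =
\begin{cases}
\max(C,\ \alpha T) & \text{for general enterprises,}\\
\min(C,\ \alpha T) & \text{for small and micro enterprises (including startups),}
\end{cases}
\]

where \(C\) is a fixed monetary ceiling, \(\alpha\) is the prescribed percentage, and \(T\) is the preceding year’s (worldwide) annual turnover. Taking purely hypothetical values of \(C = 15\) million, \(\alpha = 3\%\), and \(T = 1\) billion, a general enterprise would face a cap of \(\max(15,\ 30) = 30\) million, whereas a small or micro enterprise with the same turnover would face \(\min(15,\ 30) = 15\) million, reflecting the lenient approach described above.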
A compliance exemption mechanism could also be considered for users who establish and implement effective compliance systems. For instance, the “Artificial Intelligence Model Law (Expert Suggested Draft)” provides that developers and providers who violate the law may be exempted from or subject to reduced penalties if their compliance measures meet evaluation standards. Similarly, the “Artificial Intelligence Law (Scholar Suggested Draft)” incorporates compliance-based exemptions for users. Introducing such mechanisms can reduce the burden on enterprises, fostering the adoption and innovation of AI technologies. However, compliance must remain a proactive governance tool aimed at ensuring lawful operations and risk prevention. Overemphasizing administrative liability reduction risks distorting compliance into a tool for evading accountability, potentially creating ethical concerns.
Civil liability includes breach of contract and tort liability, with the latter being particularly critical for addressing public expectations. A structured approach to civil liability for AI users is essential.
A distinction must be made between product liability and application liability. Product liability primarily targets developers and providers, whereas application liability focuses on the risks associated with deploying and utilizing AI. This distinction is vital in determining users’ responsibilities. AI products, unlike traditional products, require continuous updates and maintenance, giving providers a degree of ongoing control. However, users maintain significant control over how and when AI products are utilized, influencing risk management and application outcomes. For AI services, users have less control compared to providers, who dictate the foundational infrastructure such as software, hardware, and network resources. Providers should thus be the primary liability bearers, with users holding auxiliary responsibility. Clear liability boundaries can be established through safe harbor rules.
Liability should also vary based on the risk level of the AI system. Different AI applications entail varying degrees of risk, warranting differentiated user responsibilities. The EU’s “Artificial Intelligence Law” employs a risk-based approach, categorizing AI systems by risk levels to determine corresponding obligations and liability regimes. For example, its “Artificial Intelligence Liability Directive” mandates strict liability for high-risk AI systems and presumed fault liability for others. Similarly, China’s legal framework could adopt a tiered approach: strict liability for high-risk AI, presumed fault liability for medium-risk AI, and fault-based liability for low-risk AI. This stratification aligns with principles already present in the “Civil Code of the People’s Republic of China,” which differentiates liability based on the risk associated with various objects and activities.
Lastly, the distinction between assistive and substitute AI systems should guide liability determination. Assistive AI, which supports human decision-making, keeps humans in the loop, as seen with diagnostic or judicial AI systems. Substitute AI, such as fully autonomous vehicles, replaces human involvement, removing direct user oversight. These differing relationships between humans and AI systems necessitate tailored liability rules. For substitute AI, where users lack supervisory or intervention responsibilities, fault-based liability may be infeasible. Instead, strict liability aligns better with the complete delegation of operational control to the AI system. Conversely, for assistive AI, where humans retain decision-making roles, fault-based liability remains relevant. For example, medical professionals utilizing diagnostic AI must reassess AI-generated outputs, ensuring decisions align with ethical and professional standards. This nuanced approach ensures that liability regimes reflect the varying degrees of control and participation inherent in different AI use cases.
5. Conclusion
In the face of the widespread application of artificial intelligence, ensuring its people-centered and ethical development through legislation is of paramount importance. AI legislation must first determine its scope of regulation, clarifying which entities and activities within the AI industry chain fall under its purview. Unlike developers and providers, users, positioned at the end of the AI industry chain, are both direct beneficiaries of AI’s technological advancements and frontline bearers of its risks. Moreover, users play a critical role in preventing and managing AI-related risks. Given this, regardless of the structural framework adopted by the future “Artificial Intelligence Law,” users will hold a significant position. To achieve the legislative objectives of AI regulation, the future “Artificial Intelligence Law” should move beyond the traditional provider-centric regulatory model. Instead, it should adopt a three-dimensional approach to user regulation, encompassing rights, obligations, and responsibilities. This comprehensive framework ensures that AI technologies serve the public to the greatest extent, maximizing their benefits while mitigating associated risks.