Chen Liang: The Dilemma and Way Out of the Systematization of Artificial Intelligence Legislation
Author: Chen Liang
Professor at the School of Artificial Intelligence Law, Southwest University of Political Science and Law
Introduction
Currently, China is at a critical juncture in the transition from the network era to the intelligent era. Disruptive artificial intelligence technologies, represented by autonomous driving, AI-generated content (AIGC), and intelligent recommendation, not only inject strong impetus into the transformation of the digital society, but also pose enormous challenges in the form of new social problems, such as governing algorithmic black boxes, bridging the digital divide, and ensuring the stable operation of artificial intelligence systems. These problems are intersecting, convergent, and dynamic, making them difficult for traditional branches of law to cover. As a complex system, artificial intelligence typically involves a broader range of organizations, personnel, and social contexts, so that the traditional concept of "fairness" tends to vary from case to case and its concrete meaning becomes difficult to grasp and classify. This far exceeds the linear, balanced, and simple reconstruction of social relationships on which traditional departmental legal relationships are premised, so traditional departmental laws inevitably find themselves in an awkward position when confronting the social problems of artificial intelligence.
To overcome the "fighting alone" shortcomings of traditional departmental law, governments around the world have developed a considerable body of policies, regulations, rules, and normative documents in the field of artificial intelligence. Unfortunately, a substantial portion of these instruments still proceeds from the entrenched concepts and regulatory approaches of traditional departmental laws, presenting fragmented, scattered, and non-systematic characteristics as a whole, and proving inadequate for the complex, systemic social problems that artificial intelligence generates. Abandoning the piecemeal legislative model of "treating the head when the head aches and the foot when the foot hurts" and formulating a comprehensive system of artificial intelligence laws and regulations with systematic thinking has quietly become the trend of artificial intelligence legislation in various countries. In contemporary China, where the wave of codification is approaching, the dilemma and way out of systematizing artificial intelligence legislation has become a topic that the legal community and legislative bodies must study.
1. The Epochal Significance of the Systematization of Artificial Intelligence Legislation
1.1 The systematization of artificial intelligence legislation is an important measure in response to national strategies
The New Generation Artificial Intelligence Development Plan proposes a "three-step" strategy for building the legal and regulatory system for artificial intelligence: a complete system is to be constructed in three steps over roughly ten years, working in concert with the ethical and policy systems of artificial intelligence to form a multi-dimensional, collaborative model of artificial intelligence governance. First, the artificial intelligence legislative system has become a key hub of the national artificial intelligence governance system; the "three-step" strategy is in essence a process of continuously raising the level of systematization of artificial intelligence legislation, enriching its institutional content, and improving legislative quality. Second, under the principle of "agile governance", systematic legislation must be able to map the entire life cycle of artificial intelligence products and services and run through the whole chain of artificial intelligence governance, so as to carry the ethical values of science and technology and back them with state enforcement. Finally, artificial intelligence legislation cannot dispense with technology policies and ethical norms, which supply more timely and market-responsive legislative material; only by drawing on them can legislation respect the laws of artificial intelligence development, promote innovation in artificial intelligence scenarios, and coordinate with legislative supervision, thereby enhancing the social adaptability of field-specific legislation. Therefore, only by observing the systematization of artificial intelligence legislation within the "policy-legislation-ethics" trinity of the artificial intelligence governance system can its problem awareness and domain characteristics be brought out, thereby continuously optimizing the national governance system for artificial intelligence.
1.2 The systematization of artificial intelligence legislation is an essential requirement of scientific legislation
A substantive interpretation of the newly added provisions of the Legislative Law makes "systematization" a necessary element of scientific legislation; since then, "legislative systematization" has had a clear legal basis and concrete standards of legal judgment. In terms of legislative technique, this constitution-related law treats "codification" as an important form of scientific legislation, and systematicity is precisely where the vitality of a code lies. In terms of legislative content, the measurement standards for scientific legislation can be made concrete: the most direct manifestation of legislative systematization is "systematicity"; "integrity" requires not only organic connections among different pieces of legislation, but also meaningful connections among the internal elements of each piece of legislation; and "synergy" requires legislative bodies to cooperate organically and legislate collaboratively on the basis of the legislative system, or to enact a package of laws and regulations with overall significance and coherence. Although the Legislative Law does not use the words "legislative systematization" literally, in essence legislative systematization has become a basic requirement of the principle of scientific legislation, thereby imposing basic constraints on China's artificial intelligence legislative model.
1.3 The systematization of artificial intelligence legislation is an inevitable requirement for building an independent knowledge system
The formation of independently innovated legal knowledge in China's artificial intelligence field requires the systematization of artificial intelligence legislation to provide material support, so that, as the doctrine of artificial intelligence law moves toward independence and maturity, it can feed back into the practice of the rule of law in a holistic manner. In other words, constructing the disciplinary system of artificial intelligence law requires systematized legislation to provide a clear and orderly "knowledge catalogue"; perfecting the theoretical system of artificial intelligence law requires systematized legislation to form a scientific and coordinated "knowledge carrier"; forming the academic system of artificial intelligence law requires systematized field legislation to present the "knowledge context" of legal change; constructing the discourse system of artificial intelligence law requires systematized field legislation to transmit logically coherent and persuasive "knowledge expression"; and completing the education system of artificial intelligence law requires systematized field legislation to serve as a practice-oriented and highly professional "knowledge intermediary". In the sense of "knowledge is power", if the rule of law is to provide sufficient incentives for the development of the artificial intelligence industry, reasonable regulation of artificial intelligence application scenarios, and effective adjudication of artificial-intelligence-related disputes, then the systematization of artificial intelligence legislation naturally becomes the logical premise for generating legal knowledge and shaping cognitive patterns.
1.4 The systematization of artificial intelligence legislation is the right choice for following the trend of the times
In the field of artificial intelligence rule of law, strengthening the systematization of legislation conforms to the global trend of systematic governance of artificial intelligence. The European Union, the United States, the United Kingdom, Brazil, and others have recognized the necessity of systematized legislation and put it into practice. Their legislation exhibits three major tendencies: first, reflection on the potential drawbacks of non-systematic legislation in the artificial intelligence field, such as "market fragmentation", "conflicting compliance requirements", and "uncertainty in accountability"; second, emphasis on specialized artificial intelligence legislation as a legal channel for implementing national policies and translating ethical principles into binding rules; and third, the formulation of framework legislation for the field so that its content remains moderately open to the future. It is thus evident that advancing the systematization of legislation in the artificial intelligence field, and correcting the lack of systematicity through specialized framework legislation, has become the trend in the artificial intelligence rule of law.
2. The Main Challenges in the Systematization of Artificial Intelligence Legislation
Artificial intelligence legislation around the world, including in China, generally comprises, at different levels, "data rules regulating data use", "AI rules specific to particular AI applications or application domains", "general AI rules applicable to the widespread application of artificial intelligence", "application-specific non-AI rules (applicable to specific activities but not specific to artificial intelligence)", and "general, cross-domain non-AI rules". Taken as a whole, this legislation exhibits deficiencies such as an unclear concept of artificial intelligence, imprecise characterization of artificial intelligence legal norms, an implicit legislative concept, and unclear boundaries of the scope of artificial intelligence legislation. Accelerating the process of systematization is therefore urgent.
2.1 The connotation of the concept of artificial intelligence is unclear
Few normative texts related to artificial intelligence in China clearly define its essence and distinctive attributes. Most documents merely scatter descriptive feature words or type-attribution words through sentences concerning artificial intelligence, making it difficult to explain precisely what artificial intelligence is. Although the Shenzhen and Shanghai legislation and the national standard interpreting artificial intelligence terminology have each established definitional provisions to reveal the meaning of artificial intelligence, the three documents proceed respectively from "behavioral", "ontological", and "epistemological" conceptions. They differ in how they characterize artificial intelligence in terms of "methods", "structure", and "standards of autonomy", and there is no consensus on whether to enumerate the extensions or types of artificial intelligence.
2.2 Imprecise characterization of artificial intelligence legal norms
In legislative practice and academic research, the positioning and normative attributes of artificial intelligence norms are divided into different camps under the theory of the "public law-private law dichotomy". For example, a typical intelligent legal act, namely the use of facial recognition technology to process personal information, is classified by judicial interpretation, in the private-law sense, within the civil legal relationship of "information processors infringing the personal information rights and interests of natural persons". Conversely, when describing the features of artificial intelligence legislation, scholars have counted as artificial intelligence legislation the personal information protection provisions of the Cybersecurity Law and the Criminal Law, the Anti-Monopoly Law's regulation of "using data and algorithms to implement monopolies", and the administrative norms in the fields of intelligent connected vehicles, AI-assisted medical care, internet information service algorithms, and deep synthesis algorithms; these norms display a pronounced "public law" character. In addition, some scholars have characterized artificial intelligence legislation beyond the "public-private dichotomy", regarding it as "domain law" norms or as "risk legislation centered on risk control and accountability". What essential characteristics distinguish artificial intelligence legal norms from non-artificial-intelligence legal norms? The legal community and practical departments have not yet reached a satisfactory consensus.
2.3 The legislative concept of artificial intelligence is implicit rather than explicit
Above the norms of artificial intelligence there is no consistent basic concept to integrate the various values, nor is there a decisional basis for resolving conflicts and balancing interests among different values. Take "high-risk" artificial intelligence as an example: the Shanghai artificial intelligence regulations impose strict entry barriers, setting the two thresholds of "list-based management" and "compliance review" and expressly providing that the substantive standards for compliance review are "necessity, legitimacy, and controllability"; by contrast, the Shenzhen artificial intelligence regulations require only "prior evaluation" and "risk warning" as matters of procedure, without substantively delimiting a "prohibited zone" of artificial intelligence types, leaving comparatively greater room for market access. On the one hand, artificial intelligence governance calls for "inclusiveness and sharing", "open collaboration", and "shared responsibility"; a relatively unified entry threshold helps prevent market fragmentation, promotes cross-regional resource sharing, and reduces the cost of allocating responsibility in regional collaborative governance. On the other hand, artificial intelligence governance also calls for "safety and controllability" and "agile governance", and setting high entry thresholds in advance is a powerful means of upgrading security measures and proactively preventing risks. The divergent admission standards of the two regulations therefore not only reflect the particularities of local resources and development orientations, but also reveal deep-seated value tensions that are difficult to balance. The legislatures can only "compromise" temporarily by privileging a single value, and the root cause is that current legislation has established neither a fundamental concept capable of reasonably integrating the various values nor a sound method for effectively resolving value conflicts.
2.4 The scope and boundary of artificial intelligence legislation are unclear
The norms related to artificial intelligence display a distinctive distribution in terms of "level of legal effect" and "type of object": informal legal sources outnumber formal ones, and the formal sources are dominated by departmental regulations and local legislation; norms concerning the network element are the most numerous, followed by those concerning the data element, then overall norms on artificial intelligence, then norms concerning the algorithm element, with norms concerning the computing power element the fewest. In terms of "normative content", network legislation focuses on regulating social risks that are "amplified" when extended or migrated from physical space to cyberspace; data legislation focuses on maintaining data and information security and, on that basis, incentivizing the development of the big data industry and promoting the development and utilization of public data; legislation on artificial intelligence products focuses on promoting the upgrading of intelligent manufacturing and supporting the intelligent connected vehicle industry, while legislation on "general artificial intelligence" and other application scenarios is scarce; in the algorithm field, there are mainly departmental regulations governing "deep synthesis services" and "algorithm recommendation services", but formal legal sources on "computing power/cloud computing" are lacking. That is, the norms encouraging innovative applications of large models and the improvement of computing power infrastructure are essentially "red-header document" policy guidance, and there is as yet no specialized legislation in this field.
In addition, artificial-intelligence-related provisions also appear in legislation whose title does not mention "intelligence" or its elements but whose individual provisions touch on the subject, such as Article 18 of the E-Commerce Law, which regulates "recommendation algorithms" (so-called big-data price discrimination against existing customers). Such provisions are scattered across laws such as the Cybersecurity Law, the Personal Information Protection Law, and the Data Security Law, and are generally few in number. Overall, there is currently no clear criterion for determining which of the various legal norms related to artificial intelligence elements, and at which levels and with which contents, can be included in the artificial intelligence legislative system.
3. Identification Criteria for the Objects of Systematization in Artificial Intelligence Legislation
A prerequisite for the systematization of artificial intelligence legislation is to delimit, at least roughly, the scope of the legal materials that artificial intelligence law should regulate, that is, to determine which of the numerous norms and facts are to be arranged in a specific order when systematizing within the domain of legal work. In reality, the legal community and practical departments often conflate legal norms broadly related to artificial intelligence with genuine artificial intelligence legal norms as the object of systematization, to the point that artificial intelligence law has become a comprehensive "hodgepodge". This not only contributes nothing to the construction of China's independent knowledge system of artificial intelligence law, but also hampers China's artificial intelligence legislation, law enforcement, adjudication, and compliance. To select the genuine artificial intelligence legal norms from the broad mass of AI-related legal norms, three layers of screening are required: "the legal connotation of artificial intelligence", "the essential attributes of artificial intelligence legal norms", and "the conceptual pursuit of artificial intelligence legislation", so as to achieve the effect of "washing away the sand to reveal the gold".
3.1 The Legal Connotation of Artificial Intelligence - The First Filter for Identifying the Objects of Systematization
Artificial intelligence does not refer directly to a "science or discipline". First, artificial intelligence is an information-processing "system", and "programs, models, and machines" are the system's software and hardware components. The scientific research and technological development of intelligent systems constitute artificial intelligence science, which encompasses the knowledge, principles, methods, and technologies of artificial intelligence; once scientific achievements are transformed into production, the artificial intelligence industry takes shape. Artificial intelligence as a form of "rights", by contrast, is contingent on specific policy contexts and legal-theoretical discourse, carries a strong evaluative coloring, and is logically posterior, so it cannot serve as the term for defining the legal concept of artificial intelligence. Second, as to the connotation of the concept, functional and genetic modes of definition must be combined in order to achieve both "goal control" and "behavioral control" of the "rational agent". In other words, artificial intelligence is an information system that relies on computing infrastructure, processes input data through the algorithms of its control system, is embedded into carriers in integrated forms such as software or hardware, and outputs, or directly presents, a state simulating human rational functions in specific scenarios; it interacts with its environment and undergoes feedback correction under goal constraints, ultimately completing preset tasks. The legal essence of this information system is a "rational agent", which is the principal basis for defining its legal status. The legal structure of this definition comprises basic elements such as data, algorithms, software/hardware, goals/tasks, feedback, and outputs, which mutually influence and constrain one another, together with the various endogenous or manifest behaviors of these elements under the principle of "system control theory" (such as "black box", "integration", "functional simulation", and "transformation"). Third, as to the extension of the concept, it includes not only "intelligent robots" that imitate biological humans in appearance but are in essence integrations of software and hardware, but also purely software-based intelligent systems (such as ChatGPT), hardware devices (such as intelligent sensors), and other "intelligent agents" embedded in such software and hardware (such as autonomous vehicles).
From this perspective, artificial intelligence is not a single technology but an organic system with data, networks, algorithms, and other core elements and an input-output control structure. Within this system, the three core elements of data, networks, and algorithms interact and influence one another, jointly determining what kind of artificial intelligence is developed. The legal connotation of artificial intelligence thus becomes the first filter for the systematization of artificial intelligence legislation: of the numerous legal norms broadly related to artificial intelligence, after passing through this filter, at least the data law norms, network law norms, and algorithm regulation norms should become the initial objects of systematization.
3.2 The essential attributes of artificial intelligence legal norms - the second filter for identifying the objects of systematization
The fundamental purpose of the legal regulation of artificial intelligence is to remedy, reduce, or prevent harm caused by artificial intelligence, and the type of harm determines the essential attributes of artificial intelligence legal norms. The harm caused by artificial intelligence can be viewed from several perspectives: (1) from the perspective of the causative element, it may arise from any element of the artificial intelligence system, that is, from data or networks as well as from algorithms or artificial intelligence products; (2) from the perspective of the injured party, it may be individual harm or collective harm; (3) from the perspective of timing, it may be actual harm that has already occurred or potential harm that has not yet occurred; (4) from the perspective of complexity, it may be simple harm or complex harm.
As to the types of harm remedied, artificial intelligence law has a division of labor distinct from other departmental laws. It does not remedy all harm caused by artificial intelligence and its various elements, but only the potential systemic harm caused by artificial intelligence systems. Remedying such harm often faces collective action dilemmas that cannot be solved through market mechanisms, so recourse must be had to regulatory law aimed at remedying systemic or social harm. The harm caused by artificial intelligence closely resembles the type of harm that environmental law seeks to remedy, which means that artificial intelligence legal norms, like environmental legal norms, possess the essential attributes of being "ex ante, systemic, and regulatory".
The essential attributes of artificial intelligence legal norms form the second filter for the systematization of artificial intelligence legislation. After this filtering, only those artificial intelligence legal norms that take on the task of remedying potential systemic harm and exhibit the "ex ante, systemic, and regulatory" characteristics remain objects of systematization. This excludes AI-related legal norms aimed at remedying actual individual harm; for example, the private-law norms within data law that protect personal privacy or realize informational self-determination are not objects of the systematization of artificial intelligence legislation.
3.3 The Conceptual Pursuit of Artificial Intelligence Legislation - The Third Filter for Identifying the Objects of Systematization
The conceptual pursuit of artificial intelligence legislation is the final filter for identifying the objects of systematization, and determining that conceptual pursuit is of great significance.
Judging from the artificial-intelligence-related policies and regulations issued at home and abroad, as well as from theoretical research and legal practice, the development of responsible artificial intelligence has become a broad international consensus. This concept is a proposition, or normative statement, about how artificial intelligence should be developed, deployed, used, evaluated, and governed. It consists of a series of basic principles ensuring transparency, accountability, and ethics in artificial intelligence, covering all the conditions to be met throughout the artificial intelligence life cycle. It can prevent or mitigate adverse consequences of artificial intelligence applications to the greatest extent and keep the use of artificial intelligence technology consistent with public expectations, organizational values, and social and legal norms, and it is therefore favored by countries around the world. In short, "responsible artificial intelligence social governance is forward-looking, anticipatory, whole-process social governance aimed at addressing the risks of artificial intelligence development and application; the key to building such a governance system is to establish a sense of community responsibility and to shift from individual responsibility to shared responsibility." Accordingly, the concept of "developing responsible artificial intelligence" should stand at the apex of the value system, guiding the anchoring of the objects of systematization and the balancing of value conflicts.
As the fundamental value pursuit of artificial intelligence legislation, this concept not only embeds the value-judgment criterion of "coordinating the relationship between artificial intelligence development and governance"; the eight specific principles proposed in the "Principles of Artificial Intelligence Governance" likewise proceed from the two directions of development, "promoting the common welfare of humankind and innovation in artificial intelligence", and responsibility, "safeguarding human rights, fairness, and justice", and their elaboration further reveals the dialectical relationship between the two. The development of responsible artificial intelligence should be premised on ensuring human safety, systematically preventing risks across the entire development chain, and fairly allocating responsibility after harm occurs.
In sum, of the legal norms broadly related to artificial intelligence, the first filter leaves only data law norms, network law norms, and algorithm regulation norms within the objects of systematization; the second filter further selects, from among these, the artificial intelligence legal norms with "ex ante, systemic, and regulatory" characteristics; and the third filter retains only those data law, network law, and algorithm regulation norms that both pursue the concept of "developing responsible artificial intelligence" and possess those characteristics, which become the ultimate objects of systematization. Only the artificial intelligence legal norms selected through these three filters can form a system of artificial intelligence legal norms that is scientific in concept, clear in scope, theoretically self-sufficient, logically consistent, and complete.
4. The Implementation Path of the Systematization of Artificial Intelligence Legislation
Legislative systematization is an essential method by which legal professionals order legal materials, construct legal systems, secure the unity of the legal order, and solve social problems systematically. The ideal state of systematized legislation can be distilled into three standards: "the internal logic of the legislative elements is consistent", "the legislative context maintains overall significance and relevance", and "legislative values are expressed in a standardized manner". These are also important criteria for judging whether a legal system is scientific and reasonable. The construction of the artificial intelligence legislative system should proceed from the selection of the objects of systematization, be guided by general systematic methods, and focus first on "internal system" problems such as "how to set legislative values capable of integrating all the norms", "by what thread to connect the various internal sections of the system", and "how to highlight the distinctive character of artificial intelligence norms in specific rules", before turning to "external system" problems such as the legislative model.
First, "developing responsible artificial intelligence" should continue to serve as the value foundation integrating all artificial intelligence legislation, concretized into the two basic principles of "balancing fairness and efficiency" and "balancing safety and innovation", so as to perform the systemic function of "receiving and interpreting the value implications of the abstract concept above and transmitting them downward to specific principles". Given the differences among deployment fields and life-cycle stages in the application of specific principles, legislation must fall back on the basic principles to constrain how they are applied.
Second, only if the artificial intelligence legislative system is adapted to the specific social structure that artificial intelligence carries can it interpret, through the discourse of the rule of law, the meaning of life relationships under a given legal concept. In artificial intelligence systems centered on an input-output control structure, elements such as networks, data, and algorithms interact with and constrain one another, jointly determining the quality of the output model. Therefore, the mechanism of "system control theory", which reveals the "nature of the thing" of artificial intelligence, should serve as the thread connecting the elements of artificial intelligence, with "network law", "data law", and "algorithm regulation" integrated as the three interrelated and complementary main sections of the artificial intelligence legislative system.
Third, when designing the specific institutions or rules of artificial intelligence legislation, attention should shuttle back and forth between normative content and legislative objectives. By examining behavior patterns, legal consequences, the construction of rights and obligations, risk allocation, and the degree of interest protection, one should further test whether a norm possesses the "risk law" characteristics of artificial intelligence legal norms, namely "ex ante, systemic, and regulatory", and uncover the value judgments and scope of effect it contains. Only in this way can the significance of a legal norm truly be "understood". Through this process, the initially screened norms of "data law", "network law", and "algorithm regulation" are filtered a second time and the genuine artificial intelligence legal norms are selected; on that basis, a system of artificial intelligence legal norms that is scientific in concept, clear in scope, theoretically self-sufficient, logically consistent, and complete can be formed.
Finally, as to the external relations of artificial intelligence legislation as a whole, the current legislative model can hardly meet the needs of the transformation toward an intelligent society, and the underlying "divide and govern" logic of the departmental laws leaves artificial intelligence legal norms with no proper place in the existing lineage of departmental laws. A framework-style, inclusive artificial intelligence law should therefore be enacted at the level of statute in due course. Such a law should be comprehensive, safeguarding, boundary-setting, and unifying, playing the commanding role of a basic law at the apex of the pyramid.