


The Path Expansion of Artificial Intelligence Legal Governance



Author: Zhang Linghan

Professor at the Institute of Data Rule of Law, China University of Political Science and Law


Abstract: The legal governance of artificial intelligence is currently in the rule-forming stage. A contradiction exists between the "simplification" of risk-based legal governance approaches and the increasing "complexification" of AI governance demands. Despite efforts at localization within China's existing legislative practices, these refinements have yet to overcome inherent limitations. As China's AI governance advances into a stage of systemic integration, the focus must be on fostering a harmonious interaction between high-quality development and high-level security. Adaptive governance principles should guide efforts to balance safety and development, address the governance demands arising from the complex, multi-dimensional nature of AI systems, and accommodate the uncertainties and unknowns of technological progress. On this basis, it is necessary to build an adaptive legal governance framework, system, and toolbox that together form a legal governance path conforming to China's domestic institutional environment, technological and industrial foundation, and legal and policy objectives.


Keywords: artificial intelligence legislation; artificial intelligence risks; graded classification; key artificial intelligence; adaptive governance

Introduction

The global governance of artificial intelligence is undergoing a significant transition from concepts, principles, and ethics to the construction and implementation of legal systems. Mentions of artificial intelligence in legislative processes worldwide nearly doubled from 1,247 in 2022 to 2,175 in 2023. In this process, the consensus that urgently needs to form is how to clarify the objects regulated by artificial intelligence legislation and provide operable regulatory paths through the legal system. The EU Artificial Intelligence Law, which entered into force in August 2024, adopts a risk-based approach. As the world's most anticipated comprehensive legislation on artificial intelligence, it is exerting varying degrees of demonstration effect on legislation in other countries.

Although countries around the world widely recognize the strategic significance of artificial intelligence, each country's choice of legal governance path is not only deeply shaped by its political system, economic environment, and legal culture, but also depends on the interaction between law and other governance means, such as politics, economy, and technology, in that jurisdiction. The EU's Artificial Intelligence Law chose a risk-based legal governance path mainly because of the EU's own development positioning, current situation, and goals. Firstly, the path is consistent with its existing digital governance legal system; the General Data Protection Regulation, for example, likewise adopts a risk-based approach. Secondly, the EU's digital technology industry lags behind that of the United States and China while its market retains strong consumption capacity, so the main targets of legal governance are foreign enterprises entering the EU market; in balancing development and security, more emphasis falls on security, particularly on protecting EU citizens' fundamental rights from technological infringement. Thirdly, the European Union has in recent years repeatedly achieved a rule-leading effect globally through digital governance legislation, and it has attempted to reproduce the "Brussels effect" through artificial intelligence legislation.

China's legislation in the field of science and technology has been deeply influenced by the European Union. Should its artificial intelligence legislation also choose a risk-based legal governance path? Answering this question requires not only analyzing whether the risk-based path suits artificial intelligence governance, but also making an autonomous judgment based on China's institutional environment, technological and industrial foundation, and development goals. China is at a critical juncture in establishing a domestic legal system for artificial intelligence: the Standing Committee of the National People's Congress and the State Council have repeatedly included AI-law-related projects in their legislative plans. Risk governance, as a general legislative concept, will necessarily form an important part of artificial intelligence legislation; a "risk-based" path, however, goes further and makes risk the organizing center of governance. Many researchers hold that "risk-based" should be the foundation and focus of artificial intelligence legal governance, but some scholars have begun to reflect on the drawbacks of this path.

Unlike those reflections, which remain at the level of institutional design for risk governance, this paper goes deeper, to the level of governance concepts, and argues that the risk governance concept can no longer effectively realize both the high-quality development and the high-level security of China's artificial intelligence. The adaptive governance concept should be introduced into artificial intelligence legal governance to accommodate the high uncertainty and unknown prospects brought about by technological development. On this basis, the paper proposes coordinating development and security under the adaptive governance concept, building a systematic governance framework grounded in the multiple attributes of artificial intelligence, optimizing the graded classification scheme, and upgrading the institutional toolbox, so as to expand the path of artificial intelligence legal governance.

1. Theoretical and practical challenges of the risk-based legal governance path

In risk society theory, risk is characterized by uncertainty; the law must address the interlinked effects among the subsystems of human society, construct systems to respond to risk, and pursue "acceptable" safety goals. Based on this theory, the European Council proposed following a clearly defined risk-based path and introducing a proportionate and effective set of binding rules for artificial intelligence. The typical risk-based path, represented by the EU Artificial Intelligence Law, adheres to the risk governance concept: it constructs a governance framework by defining and evaluating the risks of artificial intelligence, determines a classification scheme according to the degree of risk, and configures corresponding risk management tools. As institutional research and practice have deepened, however, this path has gradually revealed theoretical and practical difficulties.

1.1 Risk cannot cover the widespread impact of artificial intelligence

The risk-based legal governance path places risk at the core of its institutional design and therefore cannot cover the extensive and profound impact of artificial intelligence on social production, social life, and the corresponding legal relationships.

Firstly, the uncertainty of risk cannot encompass the inevitable impacts of artificial intelligence. Risk is not damage that has already occurred but damage that may occur; the inevitable social changes brought about by technological development do not fit this defining characteristic. Treating every inevitable impact as a risk may cause unnecessary public panic and misallocate regulatory resources. Such inevitable impacts take three forms. First, technology challenges traditional legal governance: the "responsibility risk" defined in China's "Guidelines for the Practice of Cybersecurity Standards - Guidelines for the Prevention of Ethical Security Risks in Artificial Intelligence" refers to the recognition that traditional liability systems cannot be applied to artificial intelligence, yet this is plainly an inevitable impact that can be addressed by updating the legal system. Second, technology drives economic development and changes how social production is organized: the employment substitution brought by artificial intelligence is often discussed as a risk, but in some fields it is precisely the purpose for which artificial intelligence is applied. Third, technology reshapes social power structures and global geopolitics: technology companies have seized power that once belonged solely to the state, and the international governance of artificial intelligence will inevitably transcend the traditional framework of sovereign states, profoundly changing the global geopolitical order.

Secondly, the uncertainty of risk cannot cover the real dangers of artificial intelligence. Risk is a concept consciously constructed from experience to prevent potential future harm. Danger implies a definite or presumptive causal relationship between an event and the damage it causes, whereas risk is difficult to explain through a single linear causal chain. Accordingly, real dangers are handled by "identification", while risks are handled by estimation. Real dangers of this kind are often too readily defined as risks in legal system design, because they are aggregated and particularly difficult to trace causally to responsible persons or entities. Bias and discrimination caused by biased artificial intelligence training datasets, for example, should be defined as real dangers, so as to avoid a generalization of risk that disperses regulatory resources or shifts the costs of risk governance onto other members of society as a burden of technological development.

1.2 There is a disconnect between the legal governance framework and technological characteristics

The effectiveness of artificial intelligence legal governance depends on whether it can adapt to the technical characteristics of artificial intelligence. Because artificial intelligence combines multiple technological characteristics, risk-based legal systems built for superficially similar technologies are difficult to transplant and adapt directly.

One view holds that both AI and the Internet can be classified as information and communication technology (ICT), and that the business type subject to regulation is the provision of "information services". Given the low risk of Internet information services, the risks arising when AI facilitates users' other service applications are relatively controllable, so the Internet's risk governance model can be transplanted. For example, drawing on early Internet governance, the role of non-governmental institutions should be emphasized, with the public sector delegating certain authority to the private sector for cooperative governance. However, the national security issues involved in the Internet, such as network ideology and critical information infrastructure, emerged only gradually after the Internet had developed to a certain scale, whereas artificial intelligence has been closely tied to international competition and national security since its inception; its legal governance path is therefore inevitably different.

Some argue that artificial intelligence, like nuclear energy, is a dual-use technology that may pose catastrophic global risks. UN Secretary-General Guterres and OpenAI CEO Sam Altman have both proposed establishing an organization similar to the International Atomic Energy Agency to regulate artificial intelligence. However, governance measures for nuclear risk cannot be directly applied to artificial intelligence. First, the production materials and forms of dissemination of artificial intelligence are intangible, and open-source models in particular can be easily obtained, making them difficult to regulate effectively in the way nuclear materials are. Second, the benefits of nuclear energy can be forgone after risk assessment, but artificial intelligence has a vast range of applications and shapes the future technological competition among countries; absent clear risks, no country will strictly restrict its research, development, and use.

The technological characteristics of AI thus give it both the universality and empowering quality of the Internet and, as a form of infrastructure, an importance for national security comparable to nuclear technology. This determines that its legal governance path must take both development and security goals into account from the outset of design. In legislative practice, whether in California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which was never enacted, or in China's officially released "Interim Measures for the Management of Generative Artificial Intelligence Services" (hereinafter the "Interim Measures"), which clearly enriched the institutional arrangements for promoting development compared with its draft for public comment, one can see legislators' conflicted mentality and weighing of legal interests: intending to regulate strictly out of national security considerations, yet worried about impeding the innovation and development of the artificial intelligence industry. Risk-based governance experience from similar technologies may therefore provide institutional reference for similar scenarios, but it cannot provide comprehensive and effective institutional supply for the complex technical characteristics of artificial intelligence.

1.3 Grading and classification standards are vague and difficult to measure accurately

The risk-based legal governance path usually takes the degree of risk as its classification standard, but no convincing classification scheme has yet formed. As the universality of artificial intelligence foundation models grows, their incompatibility with traditional scenario-based regulation becomes more prominent.

Firstly, the connotation of artificial intelligence risk is vague and difficult to define clearly. On the one hand, the concept of risk itself is unclear: the vast majority of governance frameworks use "risk" only in a general way. The Institute of Electrical and Electronics Engineers (IEEE) included risk reduction among the ethical guidelines in Ethically Aligned Design (2nd edition) but did not further define risk. On the other hand, heterogeneous risks are mixed together and resist generalization. The risks of artificial intelligence include not only security risks caused by technological defects, but also risks caused by unreasonable application of the technology, inadequate management by technology users, and malicious human use. These heterogeneous risks have different sources and are difficult to capture through a single risk assessment system.

Secondly, artificial intelligence risks are difficult to measure with the accuracy legal governance requires. Risk combines factual statements with value propositions, artificial intelligence services and applications operate across different fields, and risk characterization involves value judgments along multiple dimensions. As Singapore's regulators have pointed out, risks can vary greatly with the deployment context of artificial intelligence even within a single country. Traditional risk regulation has therefore mostly been used where actuarial methods apply, such as traffic accidents, food quality, and environmental protection; artificial intelligence risks resist such precise measurement, which makes traditional risk regulation difficult to apply.

Thirdly, risk-based classification standards are generally shaped by values and political judgment. Regarding the risk classification of the EU's Artificial Intelligence Law, some EU scholars have stated bluntly that designating certain artificial intelligence as posing "unacceptable risk" is essentially a value statement and political judgment made without risk assessment, more a symbolic gesture to demonstrate the differences between the EU, the United States, and China. The typical "unacceptable risk" applications listed, including social credit scoring systems and remote biometric applications, are clearly rooted in a persistent misreading of China's relevant systems and social governance practices.

Fourthly, the universality of artificial intelligence foundation models makes it difficult for graded classification to reconcile "systemic risk" with "scenario-based regulation". In the construction of legal systems for data and personal information, scenario-based regulation can precisely address specific risks and is widely established in law; its logic is that the level of technical risk depends on a specific "use case". Foundation models, however, are markedly universal and form a layered industry structure. One reason California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act was not enacted is precisely the difficulty of reconciling scenario-based risk regulation with the universality of artificial intelligence; as legislators bluntly put it, "risk regulation cannot be separated from scenarios".

1.4 Doubts about the effectiveness of institutional tools for legal governance

Currently, almost any institutional tool in artificial intelligence governance can be called risk regulation, since every tool can be said to prevent or mitigate risk. On closer examination, however, the relevant institutional tools reduce to two categories: one verifies security through evaluation or assessment; the other seeks to ensure, at the design stage, that large models align with the best interests of humanity, achieving value alignment. Given that value alignment remains difficult to translate into a legal system, practice focuses on various ex ante evaluation or assessment systems, such as requiring high-risk artificial intelligence systems to pass conformity assessment before being placed on the market, and these have raised many questions.

On the one hand, feedback mechanisms linking in-process management and post-event accountability are lacking. Focusing solely on ex ante risk assessment not only makes it difficult for users to obtain relief and protect their rights, but also deprives the substantive standards required by ex ante assessment of feedback, reducing them to mere formalities. Moreover, current ex ante risk assessments are mostly enterprise-led, emphasizing procedural compliance over substantive standards. The United States requires large enterprises developing large models above a specified floating-point computing threshold to share their security testing results with the government. Given large corporations' historical record of downplaying product risks while selling tobacco, asbestos, leaded gasoline, and other fossil fuel products, expectations for a risk-based system resting primarily on assessment and evaluation can hardly be optimistic.

On the other hand, the effectiveness of the institutional tools themselves remains to be verified. Consider the representative tools of "red team testing" and "benchmark testing": the former discovers model vulnerabilities by simulating adversarial users, while the latter evaluates performance and security through question-and-answer datasets. In fact, passing such tests does not necessarily mean safety. Highly capable foundation models have been found to exhibit deceptive behavior, so using assessments to guarantee safety has been derided as "like training a stubborn Nazi never to reveal his Nazi views".

In summary, the risk-based legal governance path does not match the realities and complex needs of artificial intelligence governance. To build a scientific and reasonable legal governance system for artificial intelligence in China, effective solutions to the dilemmas of the risk-based path must be proposed. Before discussing the optimal path, it is necessary first to clarify China's governance theme and legislative experience, so as to lay a solid, localized foundation for constructing the institutional system.

2. Thematic Considerations and Legislative Practice in China's AI Legal Governance

To explore China's legal governance path for artificial intelligence, we should not only assess the common risk-based path in generic terms, but also conduct an independent analysis grounded in the local institutional environment and technological and industrial foundation. Unlike legislation in traditional fields, China's AI legal governance has kept almost the same pace as the United States and Europe, and has even taken the lead in certain specific systems. Starting from local considerations of the governance theme and building on the risk-based path, China has improved its governance concepts, governance framework, graded classification scheme, and institutional tools. As China's AI legal governance enters the stage of system integration, however, the limitations of these improvements have gradually emerged.

2.1 Local considerations on the theme of China's AI legal governance

The essence of artificial intelligence legal governance depends on a country's political, economic, social, cultural, and ecological institutional environment, its stage of development, and its recognition of the value of artificial intelligence. As relevant studies have pointed out, socio-economic regulation results from the interaction between ideological strategies and more material "hard" strategies. The risk-based legal governance path represented by the European Union grew out of the EU's specific political and economic soil and has its own rationality, but it cannot simply be copied to China. Clarifying the main purpose of legal governance on the basis of China's institutional environment, technological and industrial foundation, and development goals is the starting point for constructing the institutional system.

As a new form of productive force, artificial intelligence's main function, like that of previous technological revolutions, is to promote economic and social development and improve human well-being. Just as humans who invented cars, airplanes, and other means of transport had to avoid risks to life and property while gaining convenience, the goal of artificial intelligence governance is for humanity to fully enjoy the fruits of the intelligent revolution while ensuring its safe operation. Comparing governance methods: on the one hand, legal governance and political, economic, technological, and other means complement one another, each with its own emphasis; on the other hand, legal governance is more fundamental, stable, and predictable. The legal system maintains the stability of production relations, economic structure, and social order, and inevitably changes as productive forces and production relations change. Whether the legal governance of artificial intelligence is scientific and reasonable must therefore be considered within the overall context of the economic and social development of the country concerned.

The EU's choice of a risk-based legal governance path, with security as the main theme, rests mainly on its own considerations. Firstly, it relates to the EU's global competitive strategy. Through the narrative of artificial intelligence regulation, the European Commission intends to build the EU common market into a "moral" and "trustworthy" supplier of artificial intelligence products: internally, protecting its citizens and businesses from the losses of globalization; externally, occupying high-end market segments worldwide and promoting its regulatory standards to form a unified market and a more favorable competitive environment. Secondly, because its industrial development lags behind that of the United States and China, its legislation is not primarily aimed at local enterprises, so adopting a risk-based path raises fewer concerns about industrial development. The European Competitiveness Report released by the European Commission shows that only 4 of the world's top 50 technology companies are European, while the majority come from the United States and China.

The theme of China's AI legal governance should reflect China's current perception of the value of AI. Firstly, China's distinctive position in the global technology industry, leading the world yet still catching up, determines that legal governance must hold high the banner of development. Although China's overall AI capability ranks among the world's foremost, it lags behind the United States in technological innovation and faces serious "chokepoint" problems, and a significant gap remains between Chinese firms and international technology giants in industrial development. The significant growth in artificial intelligence infrastructure and research is driven mainly by investment from the United States: the United States ranks first in the number of AI models worldwide, accounting for 44%, while China ranks second at 36%. At the same time, the United States is highly vigilant about China's technological catch-up and has imposed numerous blockade and sanction measures to that end. AI has become a complex strategic issue bearing on China's political, economic, and technological development. Lack of development is the greatest insecurity; it is thus especially urgent for China's AI legal governance to promote the high-quality development of AI.

Secondly, artificial intelligence applications now permeate every aspect of China's economic and social development, bearing closely on political, economic, military, cultural, and social security; AI has become an important field for high-level security. The value hierarchy of artificial intelligence legislation must adhere to the holistic national security concept and integrate the aim of people's security into every aspect of balancing legislative values. Constructing an artificial intelligence legal governance system requires strict adherence to safety red lines and bottom lines, full understanding and evaluation of the security risks accompanying the development of the technology industry, and assurance that artificial intelligence is safe, reliable, and controllable across the entire pre-event, in-process, and post-event cycle.

The Decision proposes achieving benign interaction between high-quality development and high-level security. Centering on the country's major strategic needs, China's legal governance of artificial intelligence should likewise take this as its main theme, promote and regulate the healthy development of artificial intelligence on the track of the rule of law, and enhance China's global competitiveness in the age of artificial intelligence.

2.2 Early Improvements and Limitations of China's AI Legal Governance

China's legal governance of artificial intelligence began with the Development Plan for a New Generation of Artificial Intelligence issued by the State Council in 2017, which set the goal of building a complete legal, regulatory, ethical, and policy system for artificial intelligence by 2030. In the course of institutional evolution, China has absorbed the experience of risk-based legal governance while, proceeding from its own realities, clarifying the theme of balancing development and security and improving its governance concepts, governance framework, graded classification scheme, and institutional tools. The legislative practice can be roughly divided into the following stages:

During the exploration stage from 2017 to 2021, China established technology legislation giving equal weight to development and security on the basis of three laws, the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law, and built a legal regulatory system for AI-related elements such as network infrastructure, data, and personal information. At the same time, a dual-mainline governance framework covering cutting-edge technology and service applications took shape. Technology governance relied mainly on ethical norms, with the formulation of the New Generation Artificial Intelligence Ethical Norms. Taking service applications as the main object of AI (algorithm) legal governance, the Administrative Provisions on Recommendation Algorithms for Internet Information Services were issued, comprehensively regulating five types of algorithmic services: generation and synthesis, personalized push, sorting and selection, retrieval and filtering, and scheduling and decision-making.

In the targeted stage from 2022 to 2023, China further clarified the theme of giving equal weight to development and security in AI legal governance, and the governance framework became increasingly clear and complete. Represented by the Administrative Provisions on Deep Synthesis of Internet Information Services and the Interim Measures, and centered on information content security, a layered governance framework for generative AI began to emerge as legal governance extended from service applications to the technical level. In addition, institutional norms for specific application scenarios such as smart justice and autonomous driving were successively introduced, a standards system for artificial intelligence technology gradually took shape, and local legislation emerged one after another.

With the rapid development of the technology industry and increasingly complex and urgent governance needs, China's legal governance of artificial intelligence entered the system integration stage in 2024. The Decision calls for improving the development and management mechanism for generative artificial intelligence and establishing an AI safety supervision system. The 2024 legislative work plan of the Standing Committee of the National People's Congress listed a "legislative project on the healthy development of artificial intelligence" as a preliminary review item, and the State Council's 2024 legislative work plan proposed that "the draft artificial intelligence law be submitted to the Standing Committee of the National People's Congress for review". This marks an acceleration in constructing a legal governance system for artificial intelligence whose main theme is benign interaction between high-quality development and high-level security, with specialized artificial intelligence legislation of higher legal rank taking the lead, relevant laws, administrative regulations, and departmental rules forming the backbone, international rules providing synergy, and ethical norms and technical standards serving as supplements. In this context, China's AI legal governance path has been optimized relative to the risk-based path in its early stage, but inherent limitations are still emerging.

Firstly, in terms of governance concept, although development and security are declared equally important, risk governance remains the foundation. The development clauses in the existing AI legal system are mostly principled and hortatory, with a strong policy flavor. In the Interim Measures, for example, although the officially released text significantly expanded the provisions promoting development, such as "encouraging innovative applications of generative artificial intelligence technology in all industries and fields", these are not hard rules readily implemented in practice. By contrast, the clauses on information content security, network security, data security, and personal information protection are more systematic and set out clear arrangements of rights and obligations.

Secondly, in terms of the governance framework and the understanding of underlying attributes, although a layered governance framework covering cutting-edge technology and service applications has formed, a comprehensive understanding of artificial intelligence governance as a complex system with complex underlying attributes has not. For example, China's AI Safety Governance Framework 1.0 focuses on clarifying security requirements along the dimensions of AI's endogenous technology and service applications. Meanwhile, although the Interim Measures propose to "promote the construction of generative artificial intelligence infrastructure" and to build public data pools, public computing power, and other elements, thereby extending the understanding of AI's attributes to infrastructure, these understandings are not yet systematic and cannot effectively solve the risk-based path's problems that "risk cannot cover the widespread impact" and that "the legal governance framework is disconnected from technological characteristics".

Thirdly, in terms of the classification scheme, risk-based classification ideas have been absorbed: generative synthesis algorithms carrying high information content security risks are separately regulated among the five types of algorithmic services, giving rise to the distinctive "ecological governance" proposition in institutional exploration, and more comprehensive classification standards have formed that combine service application risks with data dimensions (such as data importance) and subject dimensions (such as user scale). Yet these standards still focus mainly on the service application attribute of artificial intelligence and mix classification criteria of data, algorithms, subjects, and scenarios. A comprehensive and systematic classification scheme is urgently needed.

Fourthly, in terms of institutional tools, an evaluation and transparency system with local characteristics has been established but still needs improvement. China has absorbed risk governance tools such as assessment and evaluation and made localized improvements: making technology ethics review a prerequisite for research and development, requiring disclosure of training data sources, and, ahead of the European Union and the United States, imposing mandatory labeling and notification obligations for generated and synthesized content on the service side. Even so, the unknown prospects of AI technology development bring complex governance needs, and the capacity of these institutional tools to respond to systemic risks and uncertainty must be enhanced.

Evidently, the early improvements China made to the risk-based legal governance path cannot yet meet the increasingly urgent and complex needs of artificial intelligence governance. On the basis of systematically summarizing legislative experience, it is necessary to expand the risk governance concept and deepen the understanding of the attributes of artificial intelligence, laying a theoretical foundation for the systematic construction of an AI legal governance system.

3. Expanding the theoretical basis of China's AI legal governance path

To expand and optimize China's legal governance path for artificial intelligence, two theoretical moves are needed. On the one hand, the risk governance concept should be upgraded to an adaptive governance concept, adopting flexible and dynamic strategies for complex and uncertain problems, so as to better achieve the governance theme of coordinating AI development and security. On the other hand, the multiple attributes of artificial intelligence must be scientifically understood and distinguished, so as to clarify the systematic foundation of AI legal governance and address the complex impact of AI's widespread and deep embedding in every aspect of social operation.

3.1 Integrating development and security under the adaptive governance concept

The legal governance of artificial intelligence should comprehensively introduce the adaptive governance concept to resolve the difficulties in applying the risk governance concept; doing so accords with the governance theme of achieving benign interaction between high-quality development and high-level security in China's artificial intelligence.

As an emerging governance concept, the essence of adaptive governance is to adjust governance strategies in response to changes in the external environment and to improve the system's adaptive capacity. The concept originated in ecology and was later extended to highly complex social-ecological systems. Because the participants in these problems are diverse and uncertain and the problems themselves change rapidly and nonlinearly, and especially given today's recurring "black swan" events, adaptive governance focuses on how to make decisions under high uncertainty.

The adaptive governance concept has the following characteristics. First, it aims at the sustainable development of highly complex social-ecological systems. Ecological resilience was proposed to improve a system's capacity to withstand disturbance; the concept of the social-ecological system emerged later. When a system is complex and nested across multiple layers, emphasizing only the resilience of the ecosystem, or relying on a single force for environmental governance, is insufficient. A more flexible governance concept is therefore needed, applicable to varied and specific situations and promoting the sustainable development of the social-ecological system. Second, it highly accommodates uncertainty and unknown prospects. Ecological adaptive governance addresses shifts in ecological baselines and increased environmental uncertainty; extended to public governance, it denotes the ability of governing actors to respond to uncertain changes in the external environment, solving the problem that traditional management methods cannot cope with change. Third, it is dynamically and flexibly adaptive. Building on its ecological foundations, adaptive governance emphasizes flexibly adjusting strategies as the external environment changes, adapting to new situations and needs, and maintaining the "ideal state" of the social-ecological system; it has the advantages of flexibility and dynamic adaptation and can continuously learn, adapt, and optimize.

As noted above, the development of artificial intelligence is typically complex, unpredictable, and fast-changing. Continuing to adhere to the risk governance concept will make it difficult to match the realities and complex needs of artificial intelligence governance. The adaptive governance concept should therefore be fully introduced, as it fits artificial intelligence governance better in the following respects.

Firstly, the adaptive governance concept emphasizes the sustainable development of complex systems. On one side, adaptive governance handles complex problems through flexible strategies: as artificial intelligence embeds deeply into social operations, it forms complex systems marked by complexity, uncertainty, and multi-layered nesting, and compared with risk governance, adaptive governance offers solutions better suited to this "complexity" in governance needs. On the other side, the underlying logic of the risk governance concept is that the insecurity created by risk demands comprehensive state intervention, so security issues dominate public decision-making, displace development as the focus of social attention, and become the junction between risk society theory and the legal system. The adaptive governance concept instead adopts trial-and-error, inclusive, incentive-based flexible regulation, attending to possible future developments and functional changes rooted in technological characteristics. It resembles the "flexible governance" concept proposed in public management, sharing the goal of keeping the economy, society, and technology fully vital and creative while ensuring safety.

Secondly, the adaptive governance concept emphasizes accommodating uncertainty and unknown prospects, expanding response capacity beyond the risk-based legal governance framework and leaving ample space for technological development. The risk-based framework is constructed mainly around known risks and clear rules of rights and obligations, while the innovative capacity and iteration speed of artificial intelligence far exceed what existing frameworks cover, constantly giving rise to new applications, scenarios, and risks; the framework thus struggles with artificial intelligence's incremental risks and unknown prospects. Such risks may not be fully identified and evaluated in the design and testing phases and therefore require continuous monitoring and management in practical application. More importantly, artificial intelligence legal governance must also face the "unknown unknowns" brought by technological development, events beyond human experience and imagination. In other words, although preventing damage remains the best strategy, where risk cannot be eliminated and technological development cannot be abandoned, strengthening the adaptability of legal governance is the best choice: through institutional arrangements, proactively reserving development space for the unknown prospects of technology while establishing a safety bottom line.

Finally, the adaptive governance concept emphasizes improving the adaptability of governance tools. The main approach of risk governance is to correct technological innovation and application activities, on the premise that a technology with inherent defects may not be applied in practice until its risk is reduced to an "acceptable" level. Based on this corrective approach, its institutional design centers on imposing prohibitive or restrictive conditions before a technology is applied. The adaptive governance concept shifts the focus from ex ante prevention to minimizing the extent and duration of damage and ensuring rapid recovery afterwards, supplying institutional tools that are adaptive, flexible, and inclusive, thereby breaking through the inherent limitations of the corrective approach in risk-based legal governance.

It can be seen that introducing the adaptive governance concept into artificial intelligence legal governance does not simply negate the risk governance concept or the risk-based path; it accords with the localized governance theme of "achieving benign interaction between high-quality development and high-level security" and optimizes and expands upon that path. It not only resolves the mismatch between the simplicity of the risk-based path and the complexity of artificial intelligence governance needs, but also extends to the incremental risks and unknown prospects the risk-based path does not yet cover. Comprehensively introducing the adaptive governance concept will thus keep a balance between the stability of legal governance and the flexibility required to govern emerging technology, responding to the full range of opportunities and challenges artificial intelligence brings amid the growing complexity and uncertainty of the technology industry's development.

3.2 Clarifying the Systematic Foundation of Legal Governance with Multiple Attributes of Artificial Intelligence

Building on the adaptive governance concept, establishing China's AI legal governance framework at the system integration stage also requires a scientific understanding and analysis of the legal attributes of AI, clarifying the systematic basis of legal governance.

The connotation of artificial intelligence is vague, mixed, and highly open, and is constantly adjusted to fit the prevailing narrative. In constructing the legal governance system, the view of artificial intelligence as purely a cutting-edge technology should be abandoned; otherwise governance degenerates into a narrative that worships technological innovation, rejects regulation, and hides the true costs of artificial intelligence. In fact, when artificial intelligence technology connects with economic and social systems, it reflects both the practice of economic and social development and the political and economic institutional environment on which it relies. The "risk-based approach" of the EU's Artificial Intelligence Law classifies risks along the lines of product risks and fundamental-rights risks, then separately lists the risks of general-purpose AI systems, leaving its classification logic fractured; the frontier AI systems on which the US executive order on AI focuses reflect, above all, considerations of national security and critical infrastructure security. As noted above, although China's legislation has established a layered governance framework for cutting-edge technology and service applications and has begun to recognize artificial intelligence's infrastructure role, it has not yet formed a comprehensive understanding of AI's complex attributes, nor internalized that understanding as the basis of institutional construction.

Viewed through its infrastructure attribute, artificial intelligence bears on national development strategy and resource allocation. Traditional infrastructure is fundamental, empowering, and public: transportation, energy, and water conservancy have always been regarded as foundations of economic development to be built first. In the digital economy era, facing the demands of high-quality development, social production's demand for infrastructure has changed structurally. On the one hand, artificial intelligence has infrastructure's empowering function: large models can support generalization into many vertical industries, providing enterprises with callable capabilities and the public with lower-cost knowledge resources and creative tools. On the other hand, artificial intelligence is not the green, energy-saving technology the public commonly assumes, but an energy-intensive infrastructure composed of natural resources, electricity, human labor, logistics, and other inputs; the carbon emissions of computing infrastructure worldwide are already comparable to those of the aviation industry and are growing faster. AI infrastructure also demands heavy upfront capital investment, making it a "game for the few" and allowing industrial research to outpace the academic community; even the countries capable of "sovereign AI" are very few. Hence the research, development, application, and regulation of artificial intelligence have generally risen to the level of national strategy. Foreseeably, artificial intelligence as infrastructure will continue to generate monopolies, raising public-level questions such as how to allocate data resources and computing power, which the legal governance of artificial intelligence must also take into account.

At a higher level, artificial intelligence is also a new way of organizing social production: it drives the high concentration and organization of resources and will profoundly change future ways of working, economic structures, and even the exercise of social power. Artificial intelligence reorganizes and allocates human, knowledge, and other resources. Seemingly automated artificial intelligence in fact requires a massive workforce of low-wage laborers for development, maintenance, and testing, in industrial supply chains, on-demand clickwork, data annotation, and the like; every stage of the artificial intelligence assembly line involves a great deal of "manually driven automation". In scientific research and knowledge production, artificial intelligence demands, in the form of training data, the aggregation of large-scale databases, public data, and domain data. As a mode of organizing social production, artificial intelligence is accelerating the absorption of manpower, knowledge, capital, energy, and other resources, becoming the new "megamachine" discussed by Mumford and Marx, an organized, mechanized mode of technical operation spanning the whole society. In the future, artificial intelligence will form highly centralized social production and affect human society even more profoundly. Correspondingly, the questions it raises about ownership of the means of production, the sharing of production risks, and labor substitution in the transformation of social production organization also demand a systematic response from legal governance.

Technical principles may be neutral, but technology itself is not. By virtue of its material properties, artificial intelligence has long been part of the normative order rather than a purely neutral existence. Analyzing artificial intelligence's multiple legal attributes also helps explain the differences in risk perception across national values. Opinion polls show that people in both East and West share broadly consistent concerns about the risks of errors, false information, and discrimination that artificial intelligence may cause; yet Westerners tend to view the development and application of artificial intelligence skeptically, while Chinese respondents are generally optimistic. At the service application layer, Eastern and Western perceptions of errors and false information converge, becoming a common institutional demand across countries. In overall attitude, however, because artificial intelligence is both infrastructure and a way of organizing social production, Western anxiety about its development stems largely from the growing concentration of economic power in private hands, with tech giants holding the most advanced technology and deeply influencing the direction of policy and law; so far the industry's main response has been to sign ethical declarations on artificial intelligence, and such ethical governance is widely questioned as non-binding soft rules. In China's AI development, the government both bears responsibility for promoting industrial development and has the capacity to regulate it effectively, so the public takes a more optimistic view of the public questions raised by this future infrastructure and mode of production organization.

The multiple legal attributes of artificial intelligence are organically connected and intertwined across dimensions. It is necessary to deepen, in a comprehensive way, the understanding of artificial intelligence's multiple attributes as cutting-edge technology, service application, infrastructure, and mode of organizing social production, and in particular to distinguish the infrastructure attribute from the production organization attribute. Under the coordinating concept of adaptive governance, these attributes should serve as the premise and foundation for a scientific governance framework, graded classification scheme, and institutional tools.

4. Moving towards adaptive artificial intelligence legal governance

Guided overall by the adaptive governance concept and grounded in the multiple attributes of AI, a more advantageous legal governance path for China's AI is ready to emerge. Compared with the risk-based path, adaptive legal governance expands and upgrades the entire process systematically and dynamically, helping to form a legal governance system that embodies benign interaction between high-quality development and high-level security and that fits China's institutional environment, technological and industrial foundation, and development goals.

4.1 Building an adaptive and systematic governance framework

An adaptive legal governance framework must fit artificial intelligence's characteristics of widespread impact and interwoven attributes, achieving systematic design through clear attribute boundaries and clear governance goals. Where a governance framework lacks the attribute dimension, positional misjudgment often mismatches institutional tools, producing institutional limitations in which the medicine does not match the illness. The main reason the early draft of the Interim Measures for public comment was questioned is that it over-assigned the information content security goals proper to the service application attribute onto generative artificial intelligence, which bears cutting-edge technology and infrastructure attributes. The officially promulgated Interim Measures fully absorbed opinions from all sides; in particular, the limitation of the scope of application in Article 2 clearly reflects the institutional stance of layered governance and regulation differentiated by attribute, and has been widely recognized. Therefore, to coordinate the multiple attributes of artificial intelligence under the adaptive governance concept, the first step is to establish a clearly layered, systematic governance framework in the spirit of "dialectical treatment".

Firstly, in the attribute dimension of artificial intelligence as cutting-edge technology, legal governance should promote technological progress while preventing extreme situations such as uncontrollable risk. Scientists have proposed that the development of artificial intelligence must guard against self-replication and self-evolution. The corresponding adaptive governance goals should face squarely China's relatively disadvantaged position in Sino-US competition: following the characteristics and laws of technological innovation and promoting technological iteration and development, while also regulating research and development activities so that they observe technological ethics and ensure technological safety.

Secondly, in the attribute dimension of artificial intelligence as a service application, legal governance should focus on promoting innovative application in all industries and fields and supporting the emergence and development of new business forms and models. At the same time, full consideration should be given to the homogeneous root causes of related harms, such as immature technology and low data quality, and to the changes in legal relationships characteristic of new business forms and models, so as to provide strong institutional guarantees for national security, public safety, and citizens' legitimate rights and interests.

Thirdly, in the attribute dimension of AI as infrastructure, legal governance should focus on promoting investment and construction and on fair and efficient utilization. With the strong push of the "Artificial Intelligence+" strategy determined by the Central Economic Work Conference, treating AI as infrastructure has gradually become a consensus. Building AI infrastructure inevitably concentrates production factors and occupies large amounts of public resources such as data, computing power, and energy. At present, adaptive legal governance should adjust intellectual property rules and public data access and application rules in a timely manner, establish a system for the circulation and utilization of public data, and ensure that infrastructure construction obtains sufficient resources. Once scale is reached, a fairer and more efficient legal regime of anti-monopoly and anti-unfair competition should be dynamically constructed, for example by requiring AI companies that meet certain thresholds to exercise their quasi-public management powers reasonably.

Fourthly, in the attribute dimension of AI as a form of social production organization, legal governance should confront the adverse effects of social changes such as labor substitution and wealth concentration. AI is accelerating the substitution of labor functions, the concentration of labor organizations, and the emergence of a new intelligent working class, which will in turn deepen the concentration of means of production, social wealth, and technological power. Different governance goals and legal systems reflect the positions and choices of different political and social systems; legal governance in this attribute dimension is, in essence, a choice about how to allocate the costs of the transformation of social production. The "high-quality development" emphasized in China's AI legal governance goals means that we care about "quality" no less than "development". Therefore, the scientific design of product liability, stronger protection of workers' rights and interests in new industries, and even further strengthening of human rights protection in the AI era should all be fully considered in institutional design.

4.2 Improving the adaptive hierarchical classification governance scheme

Unlike the risk-based scheme, which relies on risk level as essentially the sole classification standard, an improved adaptive classification scheme can expand beyond the risk-level standard, combining capability, influence, and attribute dimensions into a composite standard, and on that basis comprehensively evaluate and distinguish key artificial intelligence from general artificial intelligence. More comprehensive and stricter norms of rights and obligations can then be created for key artificial intelligence.

As for the theoretical construction of key artificial intelligence: in conceptual connotation, it is not equivalent to high-risk AI, but rather AI with powerful capabilities, significant influence, and key value. In institutional extension, it can include the following types. Firstly, AI systems that meet certain capability types and standards. Secondly, AI service applications that play a core role in important scenarios and may significantly affect personal rights and interests such as life, freedom, and dignity; in the judicial field, this should include AI applications that assist adjudication by producing trial references, but not those that merely provide case retrieval. Thirdly, AI applied to the core functions of critical information infrastructure. Compared with the risk-based scheme, a classification scheme built on composite standards, and in particular on distinguishing key AI, achieves innovative optimization in the following respects.
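
To make the composite standard concrete, the following minimal Python sketch shows how the three categories above might jointly determine key AI status, in contrast to a single risk-level test. All capability labels, scenario names, and data structures are hypothetical placeholders invented for illustration, not drawn from any statute.

```python
from dataclasses import dataclass, field

# Hypothetical labels used purely for illustration.
DANGEROUS_CAPABILITIES = {"self_replication", "self_evolution", "emotion_manipulation"}
HIGH_IMPACT_SCENARIOS = {"judicial_trial_reference", "medical_diagnosis", "credit_decision"}

@dataclass
class AISystem:
    name: str
    capabilities: set[str] = field(default_factory=set)
    scenario: str = ""
    core_function_of_cii: bool = False  # critical information infrastructure

def is_key_ai(system: AISystem) -> bool:
    """Composite test mirroring the three categories in the text:
    (1) systems meeting certain capability types and standards,
    (2) core roles in scenarios significantly affecting personal rights,
    (3) core functions of critical information infrastructure."""
    if system.capabilities & DANGEROUS_CAPABILITIES:  # category (1)
        return True
    if system.scenario in HIGH_IMPACT_SCENARIOS:      # category (2)
        return True
    if system.core_function_of_cii:                   # category (3)
        return True
    return False

# Per the judicial example: a case-retrieval tool is not key AI,
# while a trial-reference assistant is.
retrieval = AISystem("case_search", scenario="judicial_case_retrieval")
assistant = AISystem("trial_assistant", scenario="judicial_trial_reference")
assert not is_key_ai(retrieval) and is_key_ai(assistant)
```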

Firstly, it enriches the grading and classification standards and coordinates the multiple standards, based on data, algorithms, subjects, and scenarios, that already exist in China's legal system. For one thing, "risk" implies a negative evaluation of AI, which makes it hard to convey AI's positive role in promoting economic and social development and human well-being, and easily fosters negative public perception; legislative practice in various countries has already reflected on and revised this to varying degrees. The Basic Act on the Development of Artificial Intelligence and the Establishment of Trust, passed by South Korea at the end of 2024, proposes the concept of "high-impact artificial intelligence"; although its connotation is similar to the EU's definition of high-risk AI systems, it uses neither the concept of risk level nor a single standard. For another, China's technology governance system already contains multiple hierarchical classification standards based on data, algorithms, subjects, and scenarios, and a composite-standard scheme grounded in different attribute dimensions can be organically coordinated with them. For example, mapping key AI to important or core data, critical information infrastructure, and internet information services with public opinion attributes or social mobilization capabilities is more scientific and reasonable than sweeping all highly capable, general-purpose AI systems into the high-risk tier under a single risk-level standard.

Secondly, it adapts to the development of AI technology and its layered business models, addressing the mismatch in risk-level standards between inherent systemic risks and scenario-based regulation. Powerful AI has raised public concern about loss of control: in 2024, dozens of AI experts signed the Beijing International Consensus on AI Security, stating that "no artificial intelligence system should be able to replicate or improve itself without explicit approval and assistance from humans." Yet AI that may lose control plainly constitutes only a very small share of cutting-edge technology. Composite standards make it possible to evaluate the cutting-edge technology attribute and the service application attribute separately, avoiding the overextension of loss-of-control concerns across attributes, which would cast a negative light on AI development as a whole and hinder its progress. In the cutting-edge technology dimension, ensuring safe and controllable research and development in key areas and firmly holding the safety bottom line leaves ample room for high-quality development; in the service application dimension, scenario-based regulation avoids obstructing the normal and reasonable development of the AI technology industry.

Thirdly, in the cutting-edge technology dimension, capabilities should be included in the classification criteria, avoiding a one-size-fits-all floating-point computing threshold that restricts technological development. Both the EU Artificial Intelligence Law and the US executive order on artificial intelligence impose stricter legal obligations on AI systems that reach a specific training-compute threshold. The rationale is that high computing power implies high capability and thus a higher risk of loss of control; yet loss-of-control risk stems from specific capabilities (such as self-replication and self-evolution), not necessarily from high capability in general. Setting a fixed floating-point threshold is akin to forbidding teenagers from growing taller in order to prevent injury, and will inevitably inhibit the healthy development of AI technology. Given that AI capabilities grow non-linearly, controlling a single dangerous capability need not impede the development of others. The evaluation of AI capabilities should therefore rest on scientific consensus, prohibiting or imposing stricter legal controls only on capabilities that may pose a loss-of-control risk, such as self-replication, self-evolution, and emotional manipulation.
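
The contrast between a compute threshold and a capability-based criterion can be illustrated with a short sketch. The compute figure below echoes the EU's published training-compute threshold for systemic-risk models; the capability labels and function names are hypothetical illustrations of the rule argued for here, not an actual regulatory test.

```python
# 10**25 FLOPs is the EU AI Act's training-compute presumption for
# systemic-risk general-purpose models; capability labels are hypothetical.
FLOP_THRESHOLD = 1e25
RUNAWAY_CAPABILITIES = {"self_replication", "self_evolution", "emotion_manipulation"}

def stricter_obligations_by_compute(training_flops: float) -> bool:
    # One-size-fits-all rule: high compute alone triggers strict control,
    # regardless of whether any runaway-relevant capability is present.
    return training_flops >= FLOP_THRESHOLD

def stricter_obligations_by_capability(capabilities: set[str]) -> bool:
    # Capability-based rule argued for in the text: only systems exhibiting
    # a capability that may pose loss-of-control risk are restricted.
    return bool(capabilities & RUNAWAY_CAPABILITIES)

# A very large but benign model is restricted under the compute rule yet
# not the capability rule; a small self-replicating system is caught
# only by the capability rule.
assert stricter_obligations_by_compute(3e25)
assert not stricter_obligations_by_capability({"code_generation"})
assert stricter_obligations_by_capability({"self_replication"})
```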

Fourthly, it comprehensively assesses risks and returns across attribute dimensions, forming a holistic mechanism for weighing development against security. A risk-level scheme typically measures risk only in the service application dimension, making a comprehensive view of risk and return difficult. For example, the privacy and vehicle-safety risks of autonomous vehicles have been raised so frequently that they dominate the public's understanding of the autonomous driving industry, and responding to public concern has accordingly become the main basis of regulatory decision-making and supervision. Yet autonomous driving also advances AI technology in the cutting-edge technology dimension, reduces accidents caused by human driving in the service application dimension, promotes the construction of "vehicle-road-cloud integration" infrastructure in the infrastructure dimension, and produces both employment substitution and overall efficiency gains in the social production organization dimension. A holistic measurement mechanism that weighs all of these together can break away from a simplistic, risk-centered perception and promote rational judgments and scientific decisions that balance development and safety.
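
Such a holistic mechanism can be sketched as a simple aggregation of benefit and risk across the four attribute dimensions, rather than a score on service-application risk alone. The numbers below are invented solely to illustrate the autonomous driving example; any real mechanism would need defensible weights and empirical inputs.

```python
# The four attribute dimensions discussed in the text.
DIMENSIONS = ("frontier_technology", "service_application",
              "infrastructure", "social_production_organization")

def holistic_assessment(scores: dict[str, tuple[float, float]]) -> float:
    """Aggregate (benefit, risk) pairs per dimension into one net score,
    instead of judging only service-application risk. All values are
    hypothetical placeholders for illustration."""
    return sum(benefit - risk for benefit, risk in scores.values())

# Autonomous driving, as in the text: salient privacy and vehicle-safety
# risks in the service-application dimension, but benefits across all four.
autonomous_driving = {
    "frontier_technology": (0.8, 0.2),             # advances AI technology
    "service_application": (0.7, 0.6),             # fewer human-error crashes vs. privacy/safety risks
    "infrastructure": (0.6, 0.1),                  # vehicle-road-cloud build-out
    "social_production_organization": (0.5, 0.4),  # efficiency gains vs. job substitution
}

net = holistic_assessment(autonomous_driving)
print(f"net benefit across dimensions: {net:+.1f}")  # positive overall here
```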

4.3 Configuring an adaptive institutional toolbox

The legal governance framework and the choice of institutional tools are closely connected. In moving from risk-based to adaptive legal governance, the upgrading of the governance framework calls for a corresponding optimization of the institutional toolbox.

Industry and the public often misunderstand legal institutional tools, assuming that "governance means strict control" and that only by leaving "blank spaces" or "letting the bullets fly for a while" can institutions promote development. This view is one-sided because unclear institutional tools may allow actors to cross the legal bottom line; once serious consequences occur, legal constraints may tighten sharply, damaging on a far larger scale the ecosystem that supports the technology industry. The institutional toolbox contains rules of varying binding force, and although "soft rules" such as ethics and technical standards occupy a growing share of AI governance, rigid rules are generally regarded as its most consequential core. The innovation of the adaptive toolbox lies in adjusting the allocation ratio of tools, enriching their types, and recalibrating the relationships among them, so as to integrate rigid legal rules organically with governance goals and to emphasize the resilience of institutional tools, thereby expanding legal governance's capacity to cope with high uncertainty and unknown prospects.

Firstly, scientifically adjust the proportion of ex ante rigid rules in the institutional toolbox. If, for the sake of security governance goals, too many rigid rules are set in advance to guarantee compliance, this may entrench the erroneous judgment that "high enforceability equals effectiveness". Configuring an adaptive toolbox should therefore begin by reducing the overall proportion of ex ante rigid rules. For example, when China's generative AI large models were launched as services, no ex ante licensing system was adopted; instead, a filing system focused on information collection was used. Secondly, a "backup regulatory plan" should be configured and left dormant so long as the new technology raises no serious problems; even when serious problems do arise, suspension of operation may be considered instead of prohibition. Drawing on the "pause updates" measure in China's network information governance, a system for initiating and lifting a "pause of operation" for AI could be established, using flexible means to avoid unduly restricting technological development and application. Finally, in addition to a fault-tolerance mechanism for regulated parties, a fault-tolerance mechanism for regulators should be established, because strict accountability for potential decision-making errors may make regulators overly cautious and reluctant to try new regulatory methods.
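
The logic of a dormant backup plan, with suspension preferred over prohibition, can be pictured as a small state machine. The states, events, and transitions below are a hypothetical sketch of the sequencing described above, not a description of any existing procedure.

```python
from enum import Enum, auto

class ServiceState(Enum):
    FILED = auto()       # filing-based entry, no ex ante licence
    OPERATING = auto()
    PAUSED = auto()      # backup plan activated: suspension, not prohibition
    PROHIBITED = auto()  # reserved for the most serious, unremediable cases

def next_state(state: ServiceState, event: str) -> ServiceState:
    """Transitions mirroring the text: the backup regulatory plan stays
    dormant unless a serious problem arises, and even then suspension
    with a path back to operation is preferred over outright prohibition.
    Event names are hypothetical."""
    transitions = {
        (ServiceState.FILED, "launch"): ServiceState.OPERATING,
        (ServiceState.OPERATING, "serious_incident"): ServiceState.PAUSED,
        (ServiceState.PAUSED, "rectification_accepted"): ServiceState.OPERATING,
        (ServiceState.PAUSED, "unremediable_harm"): ServiceState.PROHIBITED,
    }
    # Any unlisted event leaves the state unchanged (backup plan dormant).
    return transitions.get((state, event), state)

state = ServiceState.FILED
for event in ("launch", "minor_complaint", "serious_incident", "rectification_accepted"):
    state = next_state(state, event)
print(state)  # ServiceState.OPERATING: paused, then restored
```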

Secondly, focus on the effectiveness of "bottom-line prevention and control" tools, and correspondingly narrow the scope of other ex ante and in-process rigid rules, thereby reducing enterprises' compliance burden. Given that AI's capacity for innovation and speed of iteration far outstrip existing risk knowledge, the resilience of "bottom-line prevention and control" tools should be brought to bear: the ability to minimize the extent and severity of harm when problems arise, and to restore normal system operation afterwards. Typical legally mandated arrangements include, but are not limited to, emergency plans, redundant systems that take over after damage occurs, and pre-installed "kill switches" that can shut the technology down. Whereas risk-management emergency plans mainly target identified risks in specific areas, the "bottom-line prevention and control" arrangement covers identified risks, incremental risks, and unknown prospects alike, providing fault tolerance and strengthening emergency braking. Drawing on the redundant systems of the nuclear power industry, certain key AI could be required to install redundant safety measures so that even if some measures fail, the damage is minimized.
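
The resilience logic of bottom-line prevention and control, i.e. layered redundant safeguards plus a pre-installed kill switch, can be sketched as follows. The individual checks are hypothetical stand-ins; the point is only the fail-closed structure, in which no single layer's failure breaches the bottom line.

```python
from typing import Callable

# Hypothetical independent safety layers; each may fail without the
# bottom line failing, as with redundant systems in nuclear power.
SafetyCheck = Callable[[dict], bool]

def output_filter(action: dict) -> bool:
    return action.get("category") != "forbidden"

def rate_limiter(action: dict) -> bool:
    return action.get("requests_per_minute", 0) <= 100

def human_approval(action: dict) -> bool:
    return action.get("approved_by_human", False) or not action.get("high_stakes", False)

REDUNDANT_CHECKS: list[SafetyCheck] = [output_filter, rate_limiter, human_approval]

def execute_with_bottom_line(action: dict, kill_switch_engaged: bool) -> str:
    # Pre-installed kill switch blocks everything regardless of other layers.
    if kill_switch_engaged:
        return "blocked: kill switch engaged"
    # Fail closed: any single surviving check can still stop a harmful
    # action, so one layer's failure does not breach the bottom line.
    for check in REDUNDANT_CHECKS:
        if not check(action):
            return f"blocked by {check.__name__}"
    return "executed"

print(execute_with_bottom_line({"category": "forbidden"}, kill_switch_engaged=False))
```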

Thirdly, actively create "law-abiding incentive" tools, adjust their relationship with rigid obligations, and establish incentive arrangements such as "commitment and return" within a reasonable range. Given the innovation goals and practical needs of the technology industry, "compliance reduction" can be granted where a party actively fulfills its compliance obligations. For example, where legal requirements for the monitoring, early warning, and sharing of security incident information are implemented together with an AI security incident reporting system, it can further be stipulated that parties who actively report, share information, and take timely measures may have their post-event responsibilities appropriately reduced. Likewise, for the transparency tools that have attracted wide attention in risk governance, institutional design can guide foundation model developers and service providers toward more transparent technologies, and treat the fulfillment of transparency obligations as a discretionary factor in determining legal responsibility.
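
A "commitment and return" incentive can be sketched as a liability adjustment in which documented compliance behavior discounts post-event responsibility. The discount rates and the cap below are invented for illustration; actual mitigation would be a matter of legal discretion, as the text notes.

```python
def adjusted_liability(base_penalty: float, *, timely_report: bool,
                       shared_information: bool, transparent_model: bool) -> float:
    """'Commitment and return' sketch: compliance behavior discounts
    post-event responsibility. All rates are hypothetical."""
    discount = 0.0
    if timely_report:
        discount += 0.30   # reported the incident promptly
    if shared_information:
        discount += 0.10   # shared warning information with other parties
    if transparent_model:
        discount += 0.10   # transparency as a discretionary mitigating factor
    return base_penalty * (1 - min(discount, 0.5))  # cap: liability never waived entirely

print(adjusted_liability(100.0, timely_report=True,
                         shared_information=True, transparent_model=False))  # 60.0
```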

In addition, since institutional tools based on risk assessment often lack a feedback loop between in-process management and post-event responsibility, adaptive legal governance should also achieve post-event feedback through liability systems and rights remedies, strengthening the collaboration among institutional tools. An effective accountability system provides users with channels of relief and pushes the developers, deployers, and operators of AI systems to strengthen risk prevention at the front end; conversely, the absence of effective accountability prevents ex ante and in-process management standards from translating into tangible results. Under the current legal system, redress for damage relies mostly on individual lawsuits grounded in tort liability and product liability. Therefore, on the one hand, public feedback mechanisms such as administrative accountability should be established and improved, with criminal accountability as a backstop; on the other hand, the allocation of responsibility among value chain actors such as the developers, deployers, operators, and users of AI systems should be clarified, so as to achieve rapid feedback in the management loop and safeguard individual rights remedies.

Concluding remarks

The path of AI legal governance must not only fit the local institutional environment and the foundation and goals of technological and industrial development, but also meet the complex demands of AI governance and respond effectively to the unknown prospects of technological development. From a more macro perspective, this article argues that AI legal governance should not only regulate the development of cutting-edge technology and the operation of service applications, but also adjust AI's function as infrastructure for the collection and distribution of production materials and social resources, and respond to AI's profound impact, as a form of social production organization, on the structure of social power and the order of its operation. In this sense, the effectiveness of AI legal governance largely shapes the future direction of economic and social development, and choosing a path of AI legal governance also means choosing the corresponding future.

On this basis, artificial intelligence, as a representative of new quality productive forces, is closely entangled with geopolitics in today's world. Countries have accelerated AI legislation according to their governance needs in different niches of global competition, striving to emulate the "Brussels effect", lead international rule-making, and seize the initiative in global AI governance. However, the risk-based legal governance paths currently adopted by various countries face theoretical and practical challenges, and China cannot resolve their inherent drawbacks through localized refinement alone. It is therefore necessary to return to the source: introduce the concept of adaptive governance to coordinate development and security, build an adaptive systematic governance framework, improve the hierarchical classification scheme, and configure the institutional toolbox, thereby expanding and upgrading China's path of AI legal governance. It should be noted that adaptive AI legal governance is not a panacea: although it addresses the systemic limitations of the risk-based path in a targeted way, specific issues such as the precise measurement of risk and return still require further exploration. Letting legal scholarship play a leading role in building an adaptive AI legal governance system will not only help achieve positive interaction between the high-quality development and high-level security of artificial intelligence, but also enhance China's influence in global AI governance.