Establishing the Concept and Constructing the Mechanisms of Empowering Artificial Intelligence Governance
Author: Zhang Jiyu
Associate Professor of Law School and Future Law Institute of Renmin University of China
Abstract: AI governance has become a frontier issue and an important area of national and social governance. However, there is an urgent need to strengthen capacity building in many respects, including AI technological innovation, risk prevention and control, corporate self-regulation, government supervision, social supervision, and international cooperation. Enhancing the capacity for safe and trustworthy AI development must therefore be prioritized as the foremost task of AI governance, and the concept and mechanisms of 'empowering AI governance' should be established. To achieve this goal, we should adhere to the core concepts of empowering AI governance, which are human-centered and development-oriented, as well as the derived basic concepts, including AI for good, inclusive and prudent governance, agile governance, and sustainable development. An empowering AI governance mechanism centered on the rule of law should be constructed, along with specific mechanisms under the rule of law, such as perfecting mechanisms that integrate legal governance and technological governance, establishing co-governance mechanisms that promote communication and collaboration among diverse stakeholders, constructing 'safe harbor' mechanisms suited to AI development, building agile and interactive dynamic regulatory mechanisms that incentivize the development of AI for good, and constructing social security mechanisms such as AI safety insurance.
Keywords: AI; empowering governance; rule of law; human-centered; development-oriented
1 Introduction
Since the beginning of the 21st century, artificial intelligence technology has developed rapidly and is leading a new round of industrial revolution. It has increasingly become a major strategic technology that determines national competitiveness and national security. At the same time, its widespread use has brought new risks and challenges to humanity. This makes artificial intelligence governance an important part of the national governance system. The "Decision of the Central Committee of the Communist Party of China on Further Deepening Reform Comprehensively to Advance Chinese Modernization" (hereinafter the "Decision"), adopted by the Third Plenary Session of the 20th CPC Central Committee, calls for "improving the development policies and governance systems for strategic industries such as next-generation information technology, artificial intelligence, aerospace, new energy, new materials, high-end equipment, biomedicine, and quantum technology". How to govern artificial intelligence scientifically and effectively, what governance concepts to establish, what governance mechanisms to build, and what governance pattern to form are questions of our times that the legal and scientific-technological communities must answer. On July 1, 2024, the 78th United Nations General Assembly unanimously adopted the resolution proposed by China on strengthening international cooperation in artificial intelligence capacity building, with more than 140 countries joining as co-sponsors. This fully reflects the current importance of artificial intelligence capacity building.
In this regard, this article starts from the judgment that a lack of development is the greatest insecurity and insufficient development the greatest hidden danger, and argues that the safety and trustworthiness of artificial intelligence is a capability that must be built on a high level of development. In view of the urgent need for capacity building, it proposes the concept of "empowering artificial intelligence governance".
Building on the theoretical basis of empowering artificial intelligence governance, this article sets out its core concepts and basic concepts, and proposes improving a new mechanism for empowering artificial intelligence governance with the rule of law at its core. Together, the new concepts and new mechanisms of empowering artificial intelligence governance will form a new paradigm and pattern of artificial intelligence governance.
2 Theoretical Basis for Empowering AI Governance
Emerging technologies, led by artificial intelligence, are vigorously promoting the iteration and upgrading of every field of society and will play a vital role in China's modernization. Against this background, the 2024 "Government Work Report" clearly proposed a work plan for the "AI+" initiative. Artificial intelligence will be applied ever more widely in all aspects of social life and national governance. As its application fields and influence continue to expand, the potential risks of artificial intelligence systems are becoming more prominent. This imposes on artificial intelligence governance the dual demands of promoting development and ensuring security. The operation of modern society and the development of individuals rest on trust, and trust rests on confidence in the reliability of systems. Society needs to guide and ensure the safe and reliable development of artificial intelligence.
In the critical historical period of entering the intelligent era, human society needs to form operating capabilities that match the times. At present, however, the innovation and development capabilities of artificial intelligence enterprises, capabilities for risk identification and prevention, government supervision capabilities, society's capacity for supervision and for the proper application of artificial intelligence, and international cooperation capabilities are all clearly insufficient. There is a lack of adequate information and mature governance experience, and social governance faces new challenges. It should be noted that such challenges are inevitable in the early stages of the development of artificial intelligence, a disruptive and strategic technology leading the industrial revolution. Capabilities such as innovative development and risk prevention and control cannot appear out of thin air; they must be actively built in the course of development.
It can be said that the principal contradiction in the current development of artificial intelligence is that between the enormous demand for the safe and reliable development of artificial intelligence and the insufficiency of artificial intelligence development and governance capabilities. Faced with this complex situation, it is necessary to innovate the concepts and mechanisms of artificial intelligence governance and to treat the capability gap in safe and reliable development as the key issue governance must address. Therefore, in answering what kind of artificial intelligence governance mechanism China needs, we must first clarify the current governance goal: to enable the safe and reliable development of artificial intelligence through scientific governance, so that artificial intelligence technology and social governance capabilities improve in step, society ultimately acquires the ability to operate well in the era of artificial intelligence, and artificial intelligence truly serves the happiness of the people and the improvement of human well-being. This article therefore proposes constructing "empowering artificial intelligence governance" focused on improving social capabilities, for the following reasons:
2.1 The development stage of artificial intelligence
In the dynamic process of social and technological change, scientific and technological development, governance models, and normative concepts interact with one another. Determining a governance model requires grasping the laws of science and technology, of economics, and of governance, and conforming to the characteristics and needs of the development stage. The current stage of China's artificial intelligence development imposes basic requirements on its artificial intelligence governance model.
First, China's artificial intelligence development is at the forefront of the world, and there is a lack of mature governance experience to draw on. In the past, China's development in many fields lagged behind that of developed countries, and it could draw on the information and experience generated in foreign practice to make scientific decisions. Now that China's artificial intelligence technology has entered the world's leading ranks, moving from "following" to "running abreast" and even "leading" in some respects, there is almost no mature, practice-tested governance experience to learn from. "Traditional policy tools have entered an information 'blind spot', which is a serious challenge for the government in playing its role well." In this situation, it is necessary to adhere to inclusiveness and universal empowerment toward new forms and models of artificial intelligence, upgrade governance models and capabilities, and strengthen the ability to obtain timely information on risks, governance mechanisms, and governance effectiveness.
Second, artificial intelligence is still in a stage of rapid development in which unpredictable breakthroughs may occur, making it difficult for traditional static governance mechanisms premised on sufficient information to adapt to its development. The emergence of large-model-based artificial intelligence at the end of 2022, for example, challenged the draft EU Artificial Intelligence Act, on which a high degree of consensus had already formed, forcing it to be revised to add a chapter on management requirements for general-purpose artificial intelligence models. This highlights the limitations of the bill at that time and the inability of relatively static governance mechanisms to respond to technological development. It is therefore necessary to build more agile and dynamic governance mechanisms to enhance the capacity to respond to rapidly changing scientific and technological development.
Third, although artificial intelligence has made significant progress, it is still at an early stage of development overall. The scientific and technological community still lacks a thorough understanding of the phenomenon of "emergent intelligence" in large models, and the technologies used to ensure artificial intelligence safety and the alignment of ethical values are even less developed. On the one hand, this stage characteristic means that artificial intelligence cannot be required to reach an ideal safe and reliable state immediately. On the other hand, it reminds us that we cannot evaluate artificial intelligence solely on its current state of development; we should instead conduct systematic research and judgment from a dynamic, developmental perspective, in particular valuing and promoting the continuous development of the technical capabilities of the research groups and artificial intelligence companies that master advanced technologies, so as to continuously resolve the risks that arise in development.
Finally, although China's artificial intelligence development is at the forefront of the world, it still lags behind the United States. The past century of history has made us deeply aware of the significant impact of scientific and technological development on the fate of a nation. In the face of the potential impact of international developments in artificial intelligence technology, a high level of indigenous technological development is the fundamental capability for maintaining national competitiveness and national security. China must therefore attach importance to empowering the development of artificial intelligence and improving security capabilities.
In summary, the stage characteristics of China's current artificial intelligence development determine the need for empowering artificial intelligence governance. The past model of static rule-making and governance, based on relatively sufficient information drawn from the experience of more advanced countries, can hardly meet the current needs of artificial intelligence governance. On the basis of inclusiveness and universal empowerment, we should develop agile and dynamic governance mechanisms and enhance governance capabilities, focusing in particular on enabling the simultaneous development of artificial intelligence compliance technology and regulatory technology so as to continuously improve security capabilities.
2.2 Opportunities and challenges in the development of artificial intelligence
Artificial intelligence is a strategic technology that will lead the future. Many countries are actively seizing the strategic opportunities of artificial intelligence development and building first-mover advantages. In 2017, China issued the "New Generation Artificial Intelligence Development Plan" (Guofa [2017] No. 35) to guide the construction of an innovative country and a science and technology power. Under the scientific decision-making of the CPC Central Committee and the planning and deployment of the State Council, China's artificial intelligence industry has developed rapidly and is in a period of good strategic opportunity. At the same time, however, we must recognize that China's artificial intelligence development still faces a series of challenges. The United States remains in the leading position in the new round of artificial intelligence development; according to a research report released by Stanford University, US investment in artificial intelligence ranked first in the world in 2023, nearly 8.7 times that of China. Because the new generation of artificial intelligence is still in its early stages, its innovative development and application involve great uncertainty and depend on multiple factors such as data, algorithms, computing power, the industrial ecosystem, the business environment, and the environment for social application. Investment in artificial intelligence is necessarily a high-risk activity: only when the expected return exceeds the expected cost will rational investors invest in innovation. If the cost-benefit calculus of artificial intelligence innovation lacks stable and favorable expectations, companies will lack confidence in major investment and R&D, and will hesitate over whether to direct funds to artificial intelligence technology or to other projects.
Business environment factors such as supply chain accessibility and stability, market entry barriers, intellectual property protection, and the competitive environment are all institutional elements and social conditions that market entities must consider when formulating plans. They are a comprehensive reflection of national governance capabilities, institutional mechanisms, and the social environment.
General Secretary Xi Jinping has emphasized that "the field of science and technology is the field most in need of continuous reform" and that "the most urgent task in promoting independent innovation is to break down institutional barriers and maximally liberate and stimulate the huge potential of science and technology as the primary productive force". From a domestic perspective, some institutional problems still constrain the development of artificial intelligence. First, regarding data aggregation and utilization, the current legal system still presents problems such as the absence of reasonable-use rules for machine learning, excessive restrictions on data scraping and utilization in the application of the Anti-Unfair Competition Law, and unclear, inconsistent rules for the circulation and utilization of public data. Second, the development of the artificial intelligence industry still faces problems such as restricted market access in some fields, limited pilot areas, vague legal rules, high regulatory costs, and excessive tort liability; responsibilities need to be scientifically defined and regulatory mechanisms optimized. Third, the intellectual property protection system for artificial intelligence innovation remains imperfect, and effective incentive mechanisms are lacking for enterprises that actively develop value-alignment mechanisms and regulatory technologies for artificial intelligence. Fourth, security risks, algorithmic discrimination, leakage of personal information, and infringements of workers' rights and interests in some artificial intelligence applications, together with public concerns about unemployment and technological alienation and with insufficient risk-defense and regulatory capabilities, have undermined public trust in artificial intelligence and restricted its application.
In addition, from an international perspective, the United States and some other countries have suppressed Chinese information technology companies in the name of so-called "national security", restricted their products and services, and continuously tightened restrictions on chip exports to China in an attempt to curb China's innovative development in cutting-edge fields such as artificial intelligence. This also poses serious challenges to China's current development of artificial intelligence.
Whether to seize strategic opportunities or to respond to domestic and foreign challenges, it is urgent to focus on the key problem of insufficient capabilities in artificial intelligence development, build an empowering artificial intelligence governance mechanism, provide more favorable institutional conditions for the innovative development of artificial intelligence, and give enterprises incentives and more stable institutional expectations for investing in artificial intelligence innovation. At the same time, in building foreign-related rule of law, new mechanisms that can effectively respond to foreign containment and suppression should be established and improved, so as to strengthen China's capacity to participate in building the global artificial intelligence governance system, help Chinese artificial intelligence companies and industries seize the commanding heights of innovation and development, and provide strong scientific and technological support for Chinese modernization.
2.3 Basic characteristics of AI risks
As the application fields of artificial intelligence continue to expand, the potential risks of artificial intelligence have also attracted increasing social attention. Many studies have actively explored the risk issues and countermeasures of artificial intelligence. When building an artificial intelligence risk governance mechanism, it must be noted that the risks brought by artificial intelligence have some common characteristics of public risks in the sense of modern society.
First, artificial intelligence risks are public and large-scale. Public risks of this kind in modern society largely exceed the direct understanding and control of individual risk bearers. Individual citizens usually lack the capacity to recognize, prevent, and negotiate such public risks, and thus find it difficult to make rational choices based on a full understanding of them. The rational basis for citizens bearing risks autonomously has been shaken, creating the need for government intervention and legal governance.
Second, artificial intelligence risks are two-sided. Every innovative activity is accompanied by the unknown: while bringing risks, it also provides new development opportunities. Giddens described a "risk matrix" encompassing opportunity and innovation as well as safety and responsibility. He pointed out that "risk is not just a negative phenomenon that needs to be avoided or minimized; it is also a dynamic rule in a society that is separated from tradition and nature", that "opportunity and innovation are the positive side of risk", and that "active participation in risk is a necessary component of social and economic mobilization". For example, while assisted and autonomous driving technologies may create new risks of traffic accidents, they may also greatly reduce the risks caused by human drivers' fatigue and slow reactions, improving traffic safety overall. Attitudes toward risk therefore involve value judgment, integrating the assessment and comparison of the benefits and harms of a thing or behavior. On the basis of rational calculation of the probability and possible scale of harm, risk may become opportunity. We must face the two-sidedness of artificial intelligence risks with dialectical concepts and dialectical thinking, and promote the positive value and significance of scientific and technological innovation for society.
Finally, artificial intelligence risks are controllable to a certain extent. The risks brought by artificial intelligence are "man-made risks". Society allows such man-made risks to exist, on the one hand, because they are two-sided and can contribute positively to social development; on the other hand, because people believe these risks can largely be structurally controlled by guiding and regulating human activities, and that society can overcome the side effects of development through deliberate preventive action and institutionalized measures. Modern social life is reflexive: social practices are constantly examined and reformed in light of new understandings of those very practices, thus continually changing their constitutive character. We should strive to improve our capacity to recognize, prevent, and control artificial intelligence risks, and apply that capacity in society to change the pattern of risks.
Given the characteristics of artificial intelligence risks and our understanding of how they arise and can be controlled, we "need protection against risks, but also need to have the ability to face risks and treat them in a positive way." The two-sidedness of risk determines the duality of governance goals. We need to attend to the opportunities contained in risks and, through empowering governance, enhance society's capacity to recognize, prevent, and control artificial intelligence risks in the course of development, so that the fruits of artificial intelligence development truly serve the improvement of human welfare.
2.4 Existing deficiencies in AI governance capabilities
At present, society generally lacks capabilities in recognizing and evaluating artificial intelligence risks, in social supervision, in enterprises' internal self-governance, and in external government supervision. It is necessary to strengthen, with "empowerment" at the core, the governance capacity building required in the era of artificial intelligence.
First, the ability to recognize and evaluate artificial intelligence risks is insufficient, and social supervision is lacking. A comprehensive understanding and grasp of artificial intelligence risks is an important basis for risk governance. However, because risk information about artificial intelligence is insufficient and untimely, enterprises, the public, government departments, and other actors often lack a clear understanding of the risks of new artificial intelligence technologies and applications, resulting in polarization between taking artificial intelligence risks too lightly and fearing them excessively.
At present, capabilities for recognizing, evaluating, and supervising artificial intelligence risks are deficient, for three main reasons. First, the complexity of artificial intelligence itself. Artificial intelligence is characterized by "black box" opacity, a degree of "autonomy", unpredictable outputs, operating mechanisms that are difficult to explain, possible adaptation to environmental changes, and rapid iterative development, all of which challenge risk recognition, evaluation, supervision, and prevention and control. Second, the breadth of artificial intelligence's social applications. Artificial intelligence is used in all aspects of social life; identifying and assessing its risks requires the participation not only of artificial intelligence experts but also of experts in the application fields, jurists, ethicists, and sociologists, in order to more comprehensively evaluate its impact both overall and in specific scenarios. Third, the intensifying digital divide. The public's understanding of and ability to apply artificial intelligence vary greatly, which not only hinders the inclusive application of artificial intelligence technology but also leaves less knowledgeable groups more vulnerable to its risks. These three aspects together determine the need for targeted capacity building in artificial intelligence governance.
Second, enterprises lack the capacity for self-governance of artificial intelligence risks. As the main developers and deployers of artificial intelligence, enterprises have the opportunity to govern potential risks themselves during research, development, and deployment. However, many enterprises, especially small and medium-sized ones, have limited self-governance capacity and lack the necessary technical and management measures, making it difficult to translate artificial intelligence ethical norms and governance principles into effective concrete measures. Technologies that can improve the security, accuracy, robustness, explainability, fairness, and inclusiveness of artificial intelligence still require vigorous innovation and development. Moreover, because the costs of developing and implementing such technologies are high, enterprises are often reluctant to self-govern. Many artificial intelligence companies also lack computing resources and high-quality training data, which further constrains their ability to self-govern effectively.
Finally, the government's capacity to supervise scientifically and promote development is insufficient. On the one hand, scientific and effective supervision requires theoretical support, mechanism construction, and technological empowerment. Insufficient and asymmetric information about the development and laws of AI, AI risks, and effective methods of risk governance, coupled with insufficient resources and a lack of tools, leaves the operability, effectiveness, and rationality of government supervision open to question and challenge. Realizing new governance concepts for emerging technologies requires the support of national infrastructure and administrative capabilities, including the capacity to collect and scientifically analyze information, to judge risks, and to absorb positive market factors. On the other hand, promoting AI development from the government's side also requires capacity building. Elements such as computing power and data, as well as the provision of testing, evaluation, and certification services, all require institutional support. The government's promotion of integrated AI applications in public services, social governance, and economic development likewise requires building matching new risk-governance capabilities to ensure sustainable development.
Therefore, in response to the existing defects in AI governance, it is necessary to build empowering governance and high-level governance capabilities to promote the high-quality, efficient, safe and reliable development of AI.
2.5 Limitations of the existing AI regulatory model
Existing regulatory models and theories provide useful references for AI governance. However, in the face of the development trends of the new generation of AI, their limitations are gradually being revealed.
Existing regulatory models mainly include command-and-control regulation, advice-and-persuasion regulation, responsive regulation, meta-regulation, and others; their differences can be examined along the dimensions of regulators, regulated parties, types of command, and types of consequence. Command-and-control regulation usually formulates specific commands in advance, with relatively severe sanctions as the consequence of violation. Ideally, however, commands under this model require highly accurate rules, which is difficult to achieve while AI is still developing rapidly. The advice-and-persuasion model advocates establishing a cooperative rather than adversarial relationship; its commands often take the form of general goals, and it relies mainly on corporate self-regulation, but its effect is often greatly weakened when it runs up against corporate interests. In the field of AI, moreover, there are many problems such as the algorithmic black box. It is therefore difficult for either of these models to achieve ideal results, giving rise to the "Collingridge dilemma".
Many researchers have sought a more comprehensive and dynamic regulatory model between these two poles. Responsive regulation aims to bridge the gap between strong and relaxed regulation. It assumes that regulatory activity takes place in a dialogic, interactive environment: regulators usually give priority to less intrusive measures, but if these fail, they gradually adopt more punitive or compulsory ones. Its most prominent feature is the dynamic responsiveness of consequences. The responsive model can take many specific forms and is often combined with meta-regulation. Meta-regulation is the regulation of corporate self-regulation: the government requires and shapes enterprises' internal self-regulation by setting general objectives, mobilizing enterprises' initiative, allowing them to formulate appropriate concrete rules based on their own information and technological capabilities, and providing behavioral incentives through the legal consequences attached to external regulation. This model has been widely used in the field of digital technology. Its success, however, requires that the government, as external regulator, be able to obtain key information such as risk conditions, the costs and effectiveness of the measures enterprises take, and the level of relevant technological development in the industry, and be able to provide reasonable punishment and incentive mechanisms that effectively encourage enterprises to self-regulate while ensuring development. Enterprises, in turn, need the information, resources, and capabilities to implement self-regulation effectively at reasonable cost, with special consideration given to the self-regulatory capabilities of small and medium-sized enterprises.
For example, "information symmetry" was originally an important advantage of self-regulation, but amid the rapid development of artificial intelligence, it is in fact difficult for enterprises to fully grasp the risks of new artificial intelligence applications and the measures effective against them. Applying this type of regulatory model to artificial intelligence therefore places higher demands on related capacity building.
The above regulatory models form a spectrum from the strictest to the most relaxed, providing an important foundation for AI regulation. Building in particular on the responsive and meta-regulatory models, researchers have actively explored regulatory models compatible with AI governance. Conceptually, however, there is still insufficient emphasis on building the various capabilities urgently needed for the safe and trustworthy development of AI, that is, on the dimension of "empowerment". At the current stage, when human society is moving toward the intelligent era but the capacity for safe and trustworthy development has yet to be built, an effective regulatory state cannot be achieved overnight. It is necessary to upgrade the discussion of basic regulatory models into research on capacity building, clarify the goal of "empowerment", and put good regulation into practice.
In summary, the current development of the AI industry requires empowerment, and forming a safe and trustworthy technology and security system requires it even more. Security is not merely a state; it is also a capability. China's National Security Law defines national security as "the relative absence of danger and of internal and external threats to the state's power, sovereignty, unity and territorial integrity, the people's well-being, sustainable economic and social development, and other major national interests, as well as the capability to ensure a continued state of security". This capability-oriented view of security is also prominent in China's legislation in the digital field. The Cybersecurity Law defines cybersecurity in terms of "the ability to keep the network in a state of stable and reliable operation and to ensure the integrity, confidentiality and availability of network data". The Data Security Law defines data security as "ensuring that data is in a state of effective protection and lawful use, and having the capability to ensure a continued state of security". The importance of "capability" is evident in these definitions. At the same time, all three laws expressly set the goal of promoting development, not only for the sake of balance, but also because genuine security should be, and can only be, security grounded in a high level of development.
At the current stage, failing to develop is the greatest insecurity. The goal of AI governance is not to pursue a local, temporary, illusory state of security, but a state of highly developed science and technology together with the capability to ensure continued safety and trustworthiness. An effective governance mechanism that promotes capacity improvement should be built around this goal. The core of empowering AI governance is therefore to achieve, through scientific and effective governance, a high level of AI development and the capability to ensure a continued safe and trustworthy state. That is, in response to the key capacity-building needs of safe and trustworthy AI development, governance should deliver the following forms of empowerment. First, empowering enterprises to innovate and develop, improving the overall competitiveness of Chinese enterprises and the AI industry, and in particular enhancing inclusive empowerment in development factors, including the collaborative empowerment of data, computing power, algorithmic and institutional factors. Second, empowering enterprises in trustworthy self-regulation, focusing especially on the popularization and development of risk-management capabilities, so that large, medium-sized and small enterprises alike can carry out effective self-management and prevent and control ethical and security risks in AI research, development and application at the source. Third, empowering the government to regulate effectively, especially by actively developing and promoting technologies for AI governance and achieving innovative breakthroughs in AI governance technology.
Fourth, empowering the public to benefit equally from AI development and to participate effectively in AI governance, enhancing public trust in AI and creating a social environment conducive to its trustworthy development. Fifth, empowering China to participate deeply in the construction of the global AI governance system and to contribute Chinese experience, wisdom and strength to the modernization of global AI governance.
3 Establishment of the concept of empowering AI governance
The construction of empowering AI governance requires the establishment of concepts that are compatible with it: it is necessary to establish core concepts, that is, to clarify how to understand the important relationships involved in empowering AI governance, so as to establish the correct direction and boundaries for "empowerment"; it is also necessary to develop basic concepts from the core concepts to more specifically guide the construction of empowering AI governance mechanisms.
3.1 The core concept of empowering AI governance
The core concept of empowering AI governance is based on two major relationships, namely the relationship between people and things (intelligent entities), and the relationship between development and security.
3.1.1 Adhere to people-oriented and promote harmony between man and machine
Taking the people-oriented principle as a core concept makes clear that the ultimate goal of enabling the safe and trustworthy development of AI is to empower the people, and it establishes the direction and evaluation criteria for empowerment-oriented governance. The people-oriented principle is the foundation of trustworthy AI and the primary concept and ultimate concern of AI governance. It means starting from human welfare, human security, human dignity and rights, and the all-round development of the person; fully respecting and protecting human rights; and ensuring that AI solutions are people-oriented and that the use of AI is people-centered. The development of AI is ultimately meant to provide high-quality public services for all and to ensure that the achievements of AI technology are shared equally and universally. The people-oriented principle is a common value of all humankind. The laws, regulations, policy documents, programmatic declarations and ethical guidelines of the major countries at the forefront of AI development and governance almost all take it as a core concept and fundamental principle. The United Nations and related global summits likewise advocate a people-oriented approach in AI governance, as seen in the Asilomar AI Principles issued in 2017, the Recommendation on the Ethics of Artificial Intelligence adopted by the 41st General Conference of UNESCO in 2021, the Bletchley Declaration on AI signed in the UK in 2023 by 28 countries including China and the United States, and the resolution on strengthening international cooperation in AI capacity building proposed by China and adopted by consensus at the 78th United Nations General Assembly in 2024.
In the development and governance of AI, China has always adhered to the core people-oriented concept. Laws such as the Civil Code and the Personal Information Protection Law fully embody the people-oriented spirit, focusing on the protection of citizens' rights in the intelligent era to personal information, privacy, portrait and voice, and freedom from purely automated decision-making. On October 18, 2023, China issued the Global AI Governance Initiative, Article 1 of which advocates: "The development of AI should adhere to the concept of 'people-oriented', with the goal of enhancing the common welfare of mankind, and on the premise of ensuring social security and respecting human rights, to ensure that AI always develops in a direction conducive to the progress of human civilization." People-oriented is not an empty conceptual label; it should be implemented as a rule-of-law principle and legal discourse of AI governance, guiding the core direction of empowerment. It is reflected, first, in the human-rights principle of respecting and protecting human rights; second, in fairness and justice, emphasizing algorithmic fairness, barrier-free digital design, narrowing the digital divide and achieving inclusive development; and finally, in ensuring human subjectivity and autonomy in the operation of AI, realizing "human-in-the-loop" control, developing methods such as human-machine collaborative control, and ensuring that humans can always substantively supervise and control AI.
3.1.2 Adhere to development orientation and coordinate development and security
Development is the eternal theme of human society. Adhering to the development orientation and coordinating development and security is another core concept of empowering AI governance.
All countries in the world must consider how to balance development and security in AI governance. At the last moment before voting on the "AI Act", the European Parliament was still debating how to balance the promotion of innovation and the prevention of risks, highlighting that how to handle the relationship between development and security is a basic issue and difficult problem in AI legislation.
In China's modernization in the new era, the relationship between development and security has evolved into one between high-quality development and high-level security: high-quality development should promote high-level security, and high-level security should safeguard high-quality development. The "Decision" calls for "accelerating the construction of a new development pattern and promoting high-quality development", "building a safer China at a higher level, improving the national security system", and "achieving a benign interaction between high-quality development and high-level security". This establishes the fundamental principle for AI governance. This core concept should be integrated into all aspects of AI governance, security and development should be planned in a coordinated manner, and an ecosystem conducive to the safe and trustworthy development of AI should be built.
To correctly understand and coordinate the relationship between AI development and security, we should start from China's level of AI development and governance capabilities and conduct forward-looking, targeted and reserve research on strategic AI issues. Development is the foundation of security. History and reality have proved that "development is the foundation and key to solving all of China's problems". China's AI technology is still in a catching-up stage. Today, when AI has become core national competitiveness and even a focal point of great-power competition, a lack of AI development is the greatest insecurity, and insufficient development is the greatest risk. The capability to ensure AI's safety and trustworthiness, and to keep the major interests of the country and the people in a state of continued security in the AI era, can only be built on a high level of development. AI governance should leave sufficient space and time for innovative development. Relevant policies and legal systems should follow and respect the laws and realities of new-generation AI research, development and application; take AI development and the better realization of its role as the fundamental starting point; focus on granting rights, reducing obligations, regulating scientifically and providing services; and prevent excessive regulation from substantially damaging innovation and development. We must seize the initiative in future development and enhance the competitiveness, development capacity, sustainability, security and leadership of China's AI.
Security is the condition and guarantee of development. Without security mechanisms and measures, there can be no trustworthy development of AI. The important tasks of AI governance are: first, to draw a clear safety bottom line; second, to adhere to the holistic approach to national security in the course of development and promote security-system and capacity building across the entire AI life cycle; and third, to uphold the common, comprehensive, cooperative and sustainable global security concept as a strategic instrument to counter some countries' suppression of China's AI development and to create a security barrier and a safe environment. Adhering to the development orientation and coordinating development and security, we must uphold good laws and good governance, promote the benign interaction of development and security on the track of the rule of law, and achieve a stable balance between AI industrial development and security governance so that they reinforce each other, using safe and trustworthy new-generation AI to drive industrial upgrading and the development of new quality productive forces.
3.2 Basic Concepts of Empowering AI Governance
Based on the above core concepts and domestic and foreign AI governance experience, the empowering AI governance concept can be further expanded into the following basic concepts:
3.2.1 Intelligence for good
Intelligence for good is not only a requirement of the people-oriented concept; it also points out a concrete path for coordinating development and security and guides the continuous building of capabilities for safe and trustworthy development. It embodies the principles of science and technology for good and digital technology for good in the field of AI. Its core is to urge individuals, enterprises, industries and other entities engaged in AI research, development, provision and use to abide by public order and good morals, the core socialist values and the common values of humankind; to develop and use AI for good; to meet the people's needs for intelligent technology in pursuit of a better life; and to continuously enhance the common welfare of mankind. At the same time, it urges government departments and social organizations to govern AI for good, prevent the malicious development and application of AI technology, eliminate the digital divide, promote digital and social justice, and advance the progress of human civilization. The Global AI Governance Initiative issued by China specifically proposes that "the development of AI should adhere to the concept of 'intelligence for good'"; its purpose lies precisely here.
Looking ahead, ensuring that the development and application of AI technology follows the concept of intelligence for good is the key to AI governance. First, under the guidance of the people-oriented and development-oriented core concepts, we must continuously deepen our understanding of the value goals ("good") in AI application fields, add "good" to the goal system of AI research and development, and realize it throughout the entire R&D process. Second, we must rely on specific mechanisms and measures to promote the good development of AI technology, especially by promoting the innovation and application of technical and management measures that support AI value alignment and safety and reliability. The innovative development of different kinds of science and technology is neither balanced nor synchronized; in response to enterprises' "research bias", we must strengthen ethical and legal guidance to ensure that enterprises operate on the track of intelligence for good. While safeguarding enterprises' capacity for innovative development, we must enable innovation in R&D and management systems that is conducive to aligning AI with social and ethical values.
3.2.2 Inclusiveness and prudence
Inclusiveness and prudence embody the concept of adhering to the development orientation and coordinating development and security, and help empower enterprises to innovate and develop. It is a consistently effective concept in China's science and technology field, reflected in laws, administrative regulations and departmental rules. Article 35 of the Science and Technology Progress Law stipulates: "The state encourages the application of new technologies and, in accordance with the principle of inclusiveness and prudence, promotes the application of new technologies, new products, new services and new models, creating conditions for the application of new technologies and new products." Article 3 of the Interim Measures for the Administration of Generative Artificial Intelligence Services, jointly issued in 2023 by the Cyberspace Administration of China and six other departments, provides that inclusive-and-prudent and classified-and-graded regulation shall apply to generative AI services. Inclusiveness and prudence are dialectically unified with technological empowerment. In March 2024, Premier Li Qiang pointed out during an inspection in Beijing that AI is an important engine for developing new quality productive forces; on the premise of holding the safety bottom line, we must actively promote inclusive and prudent regulation and give new technologies sufficient space for innovation and the necessary room for trial and error.
As a basic concept of AI governance, the essence of inclusiveness and prudence is to adopt a moderately tolerant attitude toward new AI technologies, products and business forms, allowing them to correct problems on their own in the course of R&D and application while maintaining the safety bottom line, with the government intervening only moderately and in a timely manner while carefully tracking and observing. In other words, the emergence of new AI technologies, including disruptive ones, should be actively embraced with inclusive and prudent governance concepts, rules and strategies. There is no need to rush to regulate or punish the new situations and problems they cause, lest improper regulation strangle them in the cradle. From the perspective of traditional regulatory theory, inclusive and prudent regulation reflects the enforcement strategy of responsive regulation, the operating paradigm of cooperative regulation and the evolutionary logic of regulatory experimentalism. Adhering to inclusiveness and prudence helps promote a dynamic balance between development and security, fairness and efficiency, and self-discipline and external discipline, maximizing encouragement and support for innovation and empowering the development of science and technology industries.
3.2.3 Agile governance
Agile governance is a path by which dynamically developing industries coordinate development and security, and a concept for building dynamic mechanisms that enhance governance capabilities. The notion of agility is to some extent borrowed from "agile development" in 1990s software engineering. In 2018, reflecting on policymaking in the Fourth Industrial Revolution, the World Economic Forum formally proposed "agile governance", defining it as adaptive, people-oriented, inclusive and sustainable policymaking that guides more and more stakeholders to participate actively. On this view, agile governance is a continuous readiness to grasp change quickly, to embrace change actively or passively and learn from it, while contributing to actual or perceived end-user value.
In 2022, the General Office of the CPC Central Committee and the General Office of the State Council issued the Opinions on Strengthening the Governance of Science and Technology Ethics, listing agile governance as one of five governance requirements and emphasizing "strengthening the early warning, tracking and assessment of science and technology ethics risks, adjusting governance methods and ethical norms in a timely and dynamic manner, and responding quickly and flexibly to the ethical challenges brought by scientific and technological innovation". Agile governance is also emphasized in documents such as the Global AI Governance Initiative. Earlier, in 2019, China's National New Generation AI Governance Committee issued the Governance Principles for a New Generation of Artificial Intelligence: Developing Responsible AI, which listed agile governance as one of eight principles and explained it specifically; its Ethical Norms for a New Generation of Artificial Intelligence, released in 2021, likewise stipulated "promoting agile governance" in the management norms section. Agile governance has thus become a basic concept and principle of China's AI governance. Its core significance lies in respecting the laws of AI development and continuously tracking and assessing them; emphasizing rapid response and early intervention in the rhythm of governance; promoting the effective combination of flexible principles and specific typified rules in governance norms; fostering interactive cooperation in governance relationships; and favoring fast procedures and light intensity in governance methods.
Agile governance is a governance paradigm proposed for new fields with rapid development and wide influence. The rapid iteration of scientific and technological development and industrial applications of artificial intelligence requires timely adjustment of governance strategies and measures in the process of research and development and application according to development changes or new information to ensure the safety, reliability and controllability of artificial intelligence. Agile governance places special emphasis on forward-looking vision and methods, especially trying to predict and judge problems before they arise. In this sense, agile governance is also a kind of "preventive rule of law". Therefore, agile governance requires the establishment of stable and good information acquisition capabilities and mechanisms to ensure that extensive, diverse and sufficient opinions can be obtained in a timely manner when facing uncertain issues. It also requires the establishment of sound risk communication, risk reporting and early warning mechanisms, and the continuous strengthening of risk research and prediction capabilities.
3.2.4 Sustainability
The core concept of people-oriented and development-oriented also determines that sustainable development should be taken as a basic concept. Sustainable development emphasizes that while promoting technological development, such development should promote the long-term health and balance of the economy, society and environment, including environmental friendliness (green development), resource conservation, reducing inequality and digital divide, promoting education and employment, improving the quality and accessibility of public services, respecting and protecting cultural diversity, encouraging technology openness and sharing, and supporting global cooperation. Artificial intelligence technology itself also needs to focus on sustainable development, treat innovation and investment rationally, and learn from the two past "artificial intelligence winters".
The concept of sustainable development in AI governance is becoming a global consensus. UNESCO's Recommendation on the Ethics of Artificial Intelligence advises governments and institutions to give full consideration to the impact of AI technology on the United Nations Sustainable Development Goals. The Global AI Governance Initiative issued by China likewise proposes actively supporting the use of AI to promote sustainable development and to address global challenges such as climate change and biodiversity conservation. These should become the basic concepts, value standards and action guidelines of AI governance, so as to enable the public to benefit equally from AI development and to enable China to participate deeply in building the global AI governance system.
4 Mechanism Construction of Empowering AI Governance
In the design of AI governance mechanisms, we should consciously take "empowerment" as an important goal and integrate the aforementioned core and basic concepts into the design of various mechanisms related to AI governance. The specific mechanisms of empowering AI governance still need to be continuously studied and developed. At present, we should build an AI governance mechanism with the rule of law as the core, focus on key issues, and build various specific mechanisms under the leadership of the rule of law.
4.1 Building an enabling AI governance mechanism with the rule of law at its core
The reason why we need to build an AI governance mechanism and system with the rule of law as the core is that, in principle, "the rule of law is the basic way of governing the country", "it is the basic means of modern social governance", "rule of law governance is the most reliable and stable governance", "it is an important guarantee for China's modernization", and it is the key support for implementing the core concept of people-oriented and development-oriented and enabling the safe and reliable development of AI.
First, the inevitable trend of AI governance is to move from "soft law" governance to a form guided and guaranteed by "hard law" that combines "soft law" with "hard law". In the past, AI governance in the major AI countries relied mainly on "soft law", that is, AI ethics, industry norms, technical standards and the like. With the development of AI, however, and especially as the security risks of disruptive technologies such as large AI models spill over, the limitations and inefficiency of "soft law" have become increasingly apparent, and it struggles to give companies clear expectations. AI governance is therefore shifting toward a new form in which "hard law" provides guidance and guarantees and "soft law" and "hard law" govern collaboratively. This marks the orderly advance of AI governance on the track of the rule of law.
Taking "hard law" as guidance and guarantee does not mean ignoring the advantages and role of "soft law"; rather, it means attending to the supporting role "hard law" can play for "soft law" while giving full play to the latter's positive role. In the AI governance system, "soft law" such as science and technology ethics, moral norms, industry norms, technical standards, corporate rules and international declarations plays an irreplaceable positive role, especially ethics, industry norms and technical standards in specific application scenarios. The guidance and institutional support of "hard law" allows "soft law" to function better, forming a pattern of "internal and external integration, governance by both law and virtue". The international community has begun to pursue the legalization and standardization of AI governance. The EU, for example, has successively passed a series of important digital laws, and on the basis of the Ethics Guidelines for Trustworthy AI issued in 2019 formulated and passed the Artificial Intelligence Act, marking the EU's transformation and upgrading from "soft law" to "hard law" in AI governance. The United States has issued a series of executive orders, such as the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which sets out clear work arrangements and requirements for administrative departments.
China's "hard law" guarantee for AI governance still lags behind the needs of AI development and security. Although China has formulated and implemented AI-related laws, administrative regulations and local regulations, such as the Cybersecurity Law, the Data Security Law and the Personal Information Protection Law, and has issued departmental rules such as the Interim Measures for the Administration of Generative Artificial Intelligence Services, it still lacks dedicated AI laws and administrative regulations, above all a basic AI law. The New Generation Artificial Intelligence Development Plan issued by the State Council in 2017 proposed that "by 2025, AI laws and regulations shall be initially established, and by 2030, more complete AI laws and regulations shall be in place". To implement this legislative plan, we should accelerate legislation on the basis of scientific and democratic legislation, legislation in accordance with the law and assurance of legislative quality; combine "small, fast and flexible" legislation with comprehensive legislation; and form an AI legal system within about five years, creating a legal environment conducive to the safe and trustworthy development of AI.
Second, the rule of law is the only way to promote good governance in the field of artificial intelligence. AI governance needs to implement the core concept of people-oriented and development-oriented, so that the field of artificial intelligence is both standardized and orderly and full of vitality. Achieving this goal is the "good governance" we expect. The rule of law is the only way to good governance. The rule of law can establish necessary norms and effective governance mechanisms, eliminate many uncertainties in artificial intelligence with legal certainty, and enable the subjects of artificial intelligence research and development, provision, and deployment to have stable expectations for their own activities and their results, guarantee and promote enterprises to operate in accordance with laws and regulations, artificial intelligence technology to develop for good, citizens to use it according to law, and improve social governance capabilities, thereby reducing the occurrence of uncertainties and related problems, and enhancing the confidence of society in the development and application of artificial intelligence.
Third, the rule of law is an inevitable requirement for creating an environment for AI's innovative development. General Secretary Xi Jinping has pointed out that "the rule of law is the best business environment". By the same reasoning, the rule of law is the best development environment for AI. First, with its rights-based value orientation, the rule of law constructs a legal system protecting property and other rights in the AI field, stimulating society's innovative vitality and enhancing capacity for independent innovation. Second, maintaining the rule of law for fair competition can "stimulate the vitality of market players and let all sources of power conducive to the development of social productive forces flow fully". Third, the rule of law implements graded and classified regulation, applying prudent regulation to the small number of high-risk AI systems while providing the widest possible space for innovation to the large number of lower-risk AI applications. Fourth, the rule of law creates a secure political environment, a stable social environment, a fair legal environment and a high-quality service environment for AI development.
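The "graded and classified regulation" just described can be sketched in a few lines. The tiers and obligations below are hypothetical simplifications (loosely echoing the tiered approach of instruments such as the EU AI Act), not rules stated in this article; they only illustrate how regulatory intensity scales with assessed risk.

```python
# Hypothetical sketch of graded-and-classified regulation: a lookup
# mapping an AI system's assessed risk tier to the intensity of its
# regulatory obligations. Tier names and obligations are illustrative.
OBLIGATIONS_BY_TIER = {
    "minimal": ["voluntary codes of conduct"],
    "limited": ["transparency notices to users"],
    "high": [
        "pre-market conformity assessment",
        "risk-management system",
        "human oversight",
        "post-market monitoring",
    ],
}

def obligations(risk_tier: str) -> list[str]:
    """Return supervisory obligations for a risk tier; an unknown tier
    defaults to the strictest review (a prudent default)."""
    if risk_tier == "prohibited":
        raise ValueError("deployment not permitted")
    return OBLIGATIONS_BY_TIER.get(risk_tier, OBLIGATIONS_BY_TIER["high"])

print(obligations("limited"))  # ['transparency notices to users']
```

The design choice worth noting is the asymmetry: low-risk applications face light duties that preserve space for innovation, while the small high-risk class carries the full compliance burden, which is precisely the balance the text argues the rule of law should strike.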
4.2 Improve the mechanism for integrating legal governance with technical governance
The organic integration of science and technology with the rule of law is the only viable path for governing artificial intelligence and the focus of governance capacity building in the AI era. Since the birth of the Internet and digital technology, researchers, represented by Lessig, have explored the relationship between digital technology and law and sought to promote a benign interaction between the two. Promoting "ethically aligned design", "ethics by design", and "privacy by design" is considered a key path to governing digital technology. Brownsword proposed the concept of "Law 3.0", emphasizing the use of technical solutions to achieve policy goals: regulators should adopt (or have others adopt) technical management measures that translate normative positions into practical designs. In legislation and legal application, we should fully consider the state of scientific and technological development, promote the development of intelligent technology for good, promote the universal accessibility of key elements and governance technologies, and use technology to empower corporate self-regulation and government supervision, so that society as a whole can enter the AI era in a more efficient and controllable manner. Given the current state of AI development and governance, there are two important tasks in improving the mechanism for integrating legal governance with technical governance.
First, starting from the core concepts of development orientation and the coordination of safety with development, we should analyze in depth the institutional needs of scientific and technological development within the construction of the rule of law and empower enterprises to innovate and develop. Improving technical governance capability itself depends on the development of AI technology. This requires that the construction of the rule of law grasp the development factors, stages, and needs of AI technology; examine whether key production factors suffer from insufficient supply, market failure, or similar problems; systematically interpret or improve the relevant laws; and reasonably promote the supply and efficient use of key factors. For AI companies at present, empowerment through technological factors chiefly means empowerment through algorithms, computing power, and data. Insufficient or unfair supply and circulation of these factors may hamper AI innovation, especially fair competition by small and medium-sized enterprises, while the accessibility and quality of data also affect the fairness, inclusiveness, accuracy, and security of AI systems. A scientific and effective legal system should therefore be established to improve factor quality and universal accessibility and thus empower the development of AI technology and industry.
Second, starting from the concept of AI for good, the rule of law should be used to push enterprises to continuously improve the safety and reliability of AI technology, to promote innovation in compliance and regulatory technology, and to empower corporate governance and government supervision. On the one hand, the research, development, and adoption of necessary technical measures should be linked to enterprises' duty of care, so as to encourage the development of corresponding technologies. On the other hand, attention should be paid to continuously analyzing and organizing information on governance technologies and management measures, emphasizing the "precision" of rules and combining abstract, flexible legal rules with clear, specific standards and action guidelines. In constructing these guidelines and standards, consistency among laws, administrative regulations, and technical solutions should also be ensured, so that technical standards do not misread the law.
4.3 Develop a co-governance mechanism for communication and collaboration among multiple entities
Effective and timely communication and collaboration among multiple stakeholders is an inevitable requirement for realizing the concepts of inclusive, prudent, and agile governance and for improving the capacity to identify and prevent risks. The effective exchange and sharing of risk information and governance information is an important mechanism of social governance, and the risk governance of artificial intelligence requires more timely and effective communication mechanisms. In recent years, China has seen cases in areas such as news recommendation and platform labor algorithms in which communication driven by social supervision and regulatory agencies improved intelligent algorithms. In the future, the communication of AI risk information and governance information should be further institutionalized and normalized.
First, to address the current lack of information exchange and the difficulty of governance dialogue among actors across domains, an institutionalized and normalized risk communication mechanism should be established, led by regulatory agencies and joined by multiple parties such as AI companies, researchers, industry organizations, the news media, and the public. The professional fields of the participants should cover AI technology, ethics, law, economics, management, sociology, journalism and communication, education, psychology, and related disciplines. The mechanism should continuously produce and release useful information, improve transparency, provide convenient feedback channels and handling procedures for users and the public, promote the construction of timely and convenient risk reporting and emergency response mechanisms, and establish and improve post-legislation evaluation of legal effects.
Second, experimental regulatory mechanisms should be innovated and expanded, with the regulatory sandbox as a typical example. Before potentially high-risk new AI technologies and applications enter the market, a regulatory sandbox with controllable impact can be established, allowing the relevant regulatory agencies to observe, communicate, and provide guidance, helping innovators better understand compliance requirements, establish effective risk control mechanisms, and bring safe products to market. The "Notice on the Trial Implementation of the Automobile Safety Sandbox Regulatory System" (No. 6 of 2022), jointly issued by five departments including the State Administration for Market Regulation and the Ministry of Industry and Information Technology, noted that sandbox supervision of automobile safety is significant for improving emergency response capabilities, preventing and resolving major risks, protecting the legitimate rights and interests of consumers, encouraging corporate technological innovation, and promoting best practices in safety design. In experimental supervision, regulators can fully experiment with regulatory methods, comprehensively apply appropriate regulatory technologies, conduct social governance experiments on artificial intelligence, assess risks in depth, and explore scientific and effective risk prevention and control measures.
Finally, the construction of international cooperation mechanisms for AI governance and capacity building should be accelerated to provide international cooperation and rule-based guarantees for the safe and trustworthy development of artificial intelligence. On the one hand, China should clearly establish its AI governance concepts in domestic legislation, specify its foreign assistance for AI capacity building, enhance the independent and sustainable development capabilities of developing countries, promote international development cooperation, and insist on respecting the sovereignty of other countries. On the other hand, China should actively promote the formation of an international AI governance framework with broad consensus and establish international mechanisms for circulating and sharing important AI risk information and for collaborating on cross-border risk prevention and control.
4.4 Establish a safe harbor mechanism compatible with the development of artificial intelligence
In the course of Internet development, safe harbor rules have played an important role in safeguarding the Internet's innovative development and promoting cooperative social governance. In the early stage of AI development and application, empowering enterprises to innovate and develop requires that the law respect the laws of scientific and technological development and clearly establish a "safe harbor" mechanism, with specific rules matched to the stage of development and the specific circumstances.
At the data level, data utilization is vital to the development of the AI industry. Some scholars have pointed out that the main institutional increment needed for data circulation and exchange in China lies in establishing reasonable liability rules for the legal consequences of data-trading conduct, such as safe harbor rules for qualifying on-exchange data transactions conducted through data exchanges.
From the perspective of algorithm models and social applications, today's mainstream machine-learning-based AI is probabilistic, so certain deviations and errors are unavoidable. The types and degrees of risk that AI errors may cause differ across application fields, however, and both AI and its risk prevention and control measures are still developing dynamically. Establishing reasonable, tiered, and classified safe harbor rules for the AI industry is therefore of positive significance for stabilizing industry expectations, promoting investment in innovation, and encouraging enterprises to take reasonable measures. Regulation that departs from the objective laws of AI development and is overly strict will inhibit innovation and impair society's construction of AI safety capabilities; conversely, overly loose liability exemption rules may fail to encourage AI companies' own practices of AI for good. When establishing safe harbor rules, it is therefore necessary to combine them with the multi-stakeholder communication and cooperation mechanism, establish a duty of care matched to the level of technological development, and strike an appropriate balance between leniency and strictness.
4.5 Create an agile, interactive dynamic regulatory mechanism that incentivizes development for good
Artificial intelligence is in a stage of rapid development. To build effective regulatory capabilities, a dynamic regulatory mechanism featuring tiered classification, agile interaction, and incentives for the development of AI for good can be created under the concepts of inclusive, prudent, and agile governance. In their theory of responsive regulation, Ayres and Braithwaite proposed the "enforcement pyramid" model, which arranges regulatory tools by intervention intensity from weak to strong and suggests that regulators prioritize light-touch intervention and adjust it according to the specific circumstances of the regulated parties. AI governance can extend this model by constructing a risk management pyramid, a regulatory punishment pyramid, and a value alignment incentive pyramid, dynamically adjusting among different levels of risk management requirements, punishment measures, and value alignment incentives on the basis of information gathered in practice.
First, establish a scientifically tiered and classified risk management pyramid that supports dynamic adjustment. In AI governance practice, the pyramid model can first be translated into tiered and classified risk management requirements: the risk management measures required of an AI system should be adapted to its risk level and category, ensuring that targeted and effective measures are applied to high-risk AI systems while reducing the burden on the large number of low-risk AI applications.
The risk management pyramid model has been projected into the laws, regulations, and public policies of many countries and regions. The EU Artificial Intelligence Act divides the risks of AI systems into four levels, unacceptable risk, high risk, limited risk, and minimal risk, divides general-purpose AI models into two tiers, general-purpose models and models with systemic risk, and imposes different requirements on a tiered and classified basis. China's relevant laws, regulations, and departmental rules also reflect this legislative approach of tiered and classified risk management. For example, Article 23 of the "Regulations on the Management of Algorithm Recommendation of Internet Information Services" expressly provides for tiered and classified management of algorithm recommendation service providers, a model continued in the management of deep synthesis and generative AI. In January 2021, the National Information Security Standardization Technical Committee issued the "Network Security Standard Practice Guide - Artificial Intelligence Ethical Security Risk Prevention Guide", which classifies the ethical security risks of AI into "loss-of-control risk", "social risk", "infringement risk", "discrimination risk", and "liability risk". Although the guide does not assign risk levels, they can be roughly inferred from its classification terms. Chinese scholars have likewise proposed, in their recommendations for AI legislation, that management norms and regulatory measures be established for AI systems according to their risk level or criticality.
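The tiered-and-classified logic described above can be pictured as a simple mapping from risk tier to the kind of obligation attached. The sketch below is illustrative only: the tier names follow the EU Artificial Intelligence Act's four-level scheme, but the obligation labels are paraphrases for illustration, not the Act's text, and the function name is hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU Artificial Intelligence Act."""
    UNACCEPTABLE = 4
    HIGH = 3
    LIMITED = 2
    MINIMAL = 1

# Illustrative paraphrase of the tiered obligations, not statutory text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "risk management, conformity assessment, human oversight",
    RiskTier.LIMITED: "transparency duties (e.g. disclosing AI interaction)",
    RiskTier.MINIMAL: "no mandatory duties; voluntary codes of conduct",
}

def obligations_for(tier: RiskTier) -> str:
    """Look up the management requirements attached to a risk tier."""
    return OBLIGATIONS[tier]
```

The point of the structure is the one made in the text: heavier requirements attach only at the top of the pyramid, while the large base of minimal-risk applications carries little or no mandatory burden. A dynamic adjustment mechanism would amount to reassigning a system's tier as new risk information arrives.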
On the basis of this basic framework of tiered and classified management, it is particularly important to note that AI is still developing rapidly and that information on risks and on effective governance measures remains insufficient. Information should be obtained in a timely manner through the multi-stakeholder co-governance mechanism described above, and a mechanism should be established to agilely adjust risk levels, in accordance with the law, on the basis of risk assessments. At the same time, enterprises should be given guidance on risk management measures and reasonable time to put them in place.
Second, establish a progressively applied regulatory punishment pyramid. China's Cybersecurity Law, Data Security Law, Personal Information Protection Law, and other laws already establish regulatory measures of varying intensity, from ordering rectification and issuing warnings, through confiscating illegal gains and imposing fines, to ordering the suspension of related business, suspension of business for rectification, and revocation of related business permits or business licenses, with criminal liability pursued according to law where a crime is constituted. Continuing this model, the AI field can establish a regulatory punishment pyramid of progressively increasing intensity, applying proportionate, light-touch measures first and escalating according to the specific risks involved.
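The escalation logic of Ayres and Braithwaite's enforcement pyramid, start with the lightest sufficient measure and climb only when lighter interventions fail, can be sketched as follows. The ladder below paraphrases the graduated measures listed in the paragraph above; the function and its escalation rule are a hypothetical illustration, not an actual statutory scheme.

```python
# Measures ordered from lightest to heaviest, paraphrasing the graduated
# sanctions found in China's Cybersecurity, Data Security, and Personal
# Information Protection Laws (illustrative wording only).
ENFORCEMENT_LADDER = [
    "order rectification",
    "issue warning",
    "confiscate illegal gains / impose fine",
    "suspend related business",
    "revoke business permit or license",
    "pursue criminal liability",
]

def next_measure(prior_failures: int) -> str:
    """Return the rung reached after the given number of failed lighter
    interventions, capped at the top of the pyramid."""
    idx = min(prior_failures, len(ENFORCEMENT_LADDER) - 1)
    return ENFORCEMENT_LADDER[idx]
```

A first-time, low-risk violation draws only an order to rectify, while repeated non-compliance climbs toward license revocation, which is exactly the "light intervention first, adjust to the regulated party" strategy described in the text.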
Third, establish an effectively interactive value alignment incentive pyramid. The AI governance system should also build a "value alignment incentive pyramid" linked to enterprises' AI-for-good practices, using rule-of-law mechanisms to encourage enterprises to actively pursue AI research and development aligned with social values and to continuously improve key indicators such as safety, accuracy, robustness, explainability, fairness, inclusiveness, and privacy protection, thereby enhancing the security and trustworthiness that enterprises and society as a whole enjoy in the AI era. Enterprises that actively innovate and develop AI technologies aligned with social values can be rewarded through incentive systems such as administrative awards, joint credit incentives, credit preferences, or tax incentives. This compensates for the traditional regulatory mechanism's lack of incentives for investment in research and development supporting AI value alignment, safety, and trustworthiness, and effectively promotes AI for good.
4.6 Build social security mechanisms such as AI insurance
The construction of social security capacity and mechanisms is an important aspect of empowering AI governance, and the insurance system is an important tool of modern risk governance: it not only compensates after a loss but also helps reduce and control risks in advance. First, insurance is agile and adjustable. Facing new technologies in particular, and compared with the tort liability system, insurance can collect data and conduct assessments more quickly and can be updated and adjusted more flexibly as new technologies and their safety measures evolve. Second, in the early stages of a new technology's development, the law usually needs to leave room for development, for example by setting clear safe harbor rules for AI; insurance can then fill the gap left by tort law and provide monetary relief to those whose interests are harmed. Finally, insurance can send signals and incentivize and guide relevant parties to adjust their behavior, for example through setting coverage limits and exclusions, conducting underwriting and providing information, and adjusting premiums, prompting them to take more effective risk management measures. This mitigates the risks of emerging AI technologies while giving AI companies and AI users greater security, allowing different stakeholders to continue unleashing the power of AI and its value to society.
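The premium-adjustment incentive mentioned above can be made concrete with a stylized example (not an actual insurance product): an insurer offers a discount that grows as the insured's risk management improves, so better risk controls translate directly into lower cost. The normalized score, the 30% discount cap, and the function name are all assumptions made purely for illustration.

```python
def adjusted_premium(base_premium: float, risk_score: float) -> float:
    """Scale a premium by the insured's risk management quality.

    risk_score is a hypothetical normalized score in [0, 1]:
    0 means fully adequate risk management, 1 means none.
    The insurer grants up to a 30% discount for strong controls
    (an assumed cap, for illustration only).
    """
    max_discount = 0.30
    return base_premium * (1 - max_discount * (1 - risk_score))
```

An AI company with weak controls (risk_score = 1.0) pays the full premium, while one with strong controls (risk_score = 0.0) pays 30% less, which is the signaling-and-incentive function of premium adjustment described in the text.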
Internationally, the scientific, technological, and legal communities have for many years discussed establishing insurance systems for high-risk AI products such as robots and autonomous vehicles. Local laws and regulations on autonomous driving issued in some parts of China also expressly require mandatory insurance. At the same time, China is actively exploring a cybersecurity insurance system, which can both insure against the network and data security risks of AI systems and serve as a reference for more specialized AI insurance.
It can thus be seen that insurance for AI systems has received a degree of attention in practice, but the specific schemes for constructing such insurance differ, and problems remain, such as moral hazard, the absence of global guidelines, disputes over who should purchase coverage, and the difficulty of estimating premiums. To effectively enable trustworthy development, appropriate schemes for constructing AI insurance systems should be further researched and explored. Pilot projects can begin in high-risk AI fields such as autonomous driving, and the insurance system can be used to steadily advance capacity building in AI safety assessment, monitoring, emergency response, and relief, thereby enhancing society's capacity to govern artificial intelligence.
At the same time, it is necessary to advance AI-related livelihood guarantees such as universal compulsory education, vocational skills training, and employment promotion mechanisms, bridge the digital divide, enable the public to benefit equally from AI development, and lay a sound foundation for social supervision and the scientific application of AI technology.
5 Conclusion
Human society is rapidly entering an intelligent era marked by a new generation of artificial intelligence. AI is expected to empower thousands of industries, but at its current stage it also needs to be empowered by governance. The stage of AI development, the opportunities and challenges it presents, the basic characteristics of AI risks, the existing shortcomings in AI governance capabilities, and the limitations of previous regulatory models all point toward "empowering AI governance". How to create a legal and humanistic environment favorable to AI innovation; ensure the scientific research and development, safe application, and development for good of AI technology; strengthen the reasonable expectations of AI companies, regulatory capabilities, and social trust in AI applications; and thereby effectively promote the safe and trustworthy development of AI is an important question that must be answered. The safe and trustworthy development of AI should be enabled through effective governance in accordance with the law, ultimately empowering society and the people. We need to establish new concepts of empowering AI governance, improve new mechanisms for empowering AI governance with the rule of law at their core, and move toward a new pattern of empowering AI governance, organically combining scientific and technological development with the modernization of the national governance system and governance capacity, serving the construction of Digital China and Chinese-style modernization, and contributing to global AI governance.
This article was originally published in the 5th issue of China Legal Studies in 2024. Thanks to the WeChat public account "China Legal Studies" for authorization to reprint.
Assistant Editor: Chen Yixuan
Editor-in-Chief: Tan Baijun
Reviewer: Ji Weidong