Systematic Construction of an Ethical Review System for Artificial Intelligence Technology
Author: Lin Huanmin
Associate Professor, KoGuan School of Law, Shanghai Jiao Tong University
Abstract: The core issue in artificial intelligence legislation is which path to adopt for regulating artificial intelligence activities. The risk management approach faces difficulties in risk assessment and risk classification and, by design, tolerates the occurrence of damage; it is therefore not the self-evident choice for artificial intelligence legislation. Unlike earlier technological activities, artificial intelligence activities are at once specialized and empowering technological activities. An artificial intelligence law regulating such activities should not be guided by a single theory, but should follow a dual positioning as both technology law and application law. Positioned as technology law, the Artificial Intelligence Law should respect technological autonomy, internalize technological ethics into artificial intelligence research and development activities, break down institutional barriers, design promotional rules, and help advance artificial intelligence technology. Positioned as application law, the Artificial Intelligence Law should attend to the functional alienation produced by technology-empowered scenarios: on the one hand, it should use abstract rights and obligations, in particular by stipulating new rights and constructing a flexible normative framework, to respond to the differing value orderings of different application scenarios; on the other hand, it should promote experimentalist governance and dynamically adjust regulatory strategies through regulatory sandboxes, authorized legislation, and similar designs, to meet the flexible governance needs of AI-enabled application activities.
Keywords: Artificial Intelligence, Legislation, Specialization, Technology Empowerment, Technology Ethics, Experimentalist Governance
1. Introduction
With the development and application of artificial intelligence technology, AI governance is receiving close attention from the international community. Legislatures are no longer satisfied with declaration-style ethical self-regulation by industry and have turned to enacting specialized laws, ushering artificial intelligence legislation from "soft constraints" into the era of "hard rules". The European Union moved first, introducing a comprehensive law regulating AI activities, the Artificial Intelligence Act (Proposal), in 2021; after revision, the proposal was adopted by the European Parliament in March 2024, making it the world's first comprehensive law regulating artificial intelligence activities. The United States has not yet enacted comparable horizontal legislation at the federal level, but a landmark presidential executive order issued in October 2023 requires federal departments and agencies to develop policies for regulating AI activities. In China, the State Council has included artificial intelligence legislation in its legislative work plan for two consecutive years since 2023. The Legislative Plan of the Standing Committee of the 14th National People's Congress, issued in September 2023, likewise placed AI legislation in the first category of projects (draft laws with relatively mature conditions to be reviewed during the term) and stated that matters "promoting scientific and technological innovation and the healthy development of AI... requiring the formulation, amendment, repeal and interpretation of relevant laws, or requiring the National People's Congress and its Standing Committee to make relevant decisions" would be arranged for review in a timely manner. It is foreseeable that once the preparatory work is completed, China's AI legislation will accelerate. The contested issues surrounding AI legislation include, but are not limited to, five points: the regulatory object of an artificial intelligence law, the basic path of AI legislation, emerging rights in the AI era, regulatory agencies, and legal liability. Among these, the choice of path for regulating AI activities involves value judgments and governance concepts, reflecting legislators' ethical stance toward the unknown. On the basic path, the most cautious attitude holds that if risks cannot be prevented, the application of a new technology should not be allowed, but this position is clearly too conservative. The risk management approach, by contrast, emphasizes embracing risk and tolerating error, and is far friendlier to technological research and application. The EU's Artificial Intelligence Act explicitly adopts a risk management approach: Recital 14 of its preamble states that the Act follows a "risk-based approach", determining the type and content of rules according to the intensity and scope of the risks that AI systems may generate. The Bletchley Declaration, the world's first international statement on artificial intelligence, reached in the United Kingdom, likewise emphasizes the need for a "risk-based policy" to adjust AI activities.
China has also signed the Bletchley Declaration, accepting a risk-based approach to regulating artificial intelligence. The risk management approach, however, may not be sound. In the author's view, if China simply follows other countries' models in legislating on artificial intelligence, it will not only fail to contribute its own intellectual achievements to the rule of law worldwide, but will also miss the opportunity to lead global AI governance. Studying the basic path of AI legislation is therefore not only necessary for developing China's own legal system and establishing a landmark in the field of Chinese law, but is also a matter of course for carrying out international cooperation on artificial intelligence, contributing Chinese wisdom, and demonstrating the responsibility of a major power. Accordingly, after analyzing the advantages and disadvantages of the risk management approach, this article proposes an alternative approach and strategy for AI legislation by distinguishing the dual attributes of AI activities, and seeks to construct a governance framework that can meet the dynamic regulatory needs of AI while promoting its innovative development.
2. Review of the Single Approach to Risk Management in Artificial Intelligence Legislation
Perhaps influenced by the theory of the "risk society", people often subconsciously choose a risk management path to adjust emerging technological activities. But the risk management approach differs from risk society theory: the latter is a theoretical system exploring the risk characteristics of family, occupation, knowledge, the external natural environment, and other factors in postmodern society, whereas the former is a set of social management methods built on "risk-return" analysis. The risk management approach proceeds as follows: first, damage is defined as risk; second, behaviors that may trigger risks are assessed; finally, on the basis of cost-benefit analysis, specific risk classifications are assigned and corresponding norms are configured. The approach weighs the degree of harm a risk may cause, the cost of controlling it, and the benefits it can bring, and selects the corresponding normative design. Treating damage as risk is the basis of legitimacy for risk management, risk assessment is its prerequisite, and risk classification is its key. However, defining damage as risk without sufficient justification undervalues the protection of individual rights and interests; and because the risks arising from AI activities are highly complex, assessing and classifying them is extremely difficult, so the risk management path may be unable to adjust AI activities effectively.
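To make the workflow just described concrete, the following is a minimal, purely illustrative Python sketch of a risk management pipeline: damage is expressed as risk (probability combined with severity), compared against the expected benefit, and mapped to a hypothetical tier. All thresholds, tier names, and numbers are assumptions introduced for illustration, not values taken from any statute.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    p_harm: float      # estimated probability that damage occurs (0..1)
    severity: float    # estimated monetized severity of the damage
    benefit: float     # estimated benefit of allowing the activity

def expected_risk(a: Activity) -> float:
    # Risk as the combination of probability and severity of damage
    return a.p_harm * a.severity

def classify(a: Activity) -> str:
    """Assign a hypothetical risk tier via cost-benefit comparison."""
    risk = expected_risk(a)
    if risk > 10 * a.benefit:     # illustrative threshold, not a legal standard
        return "unacceptable: prohibit"
    if risk > a.benefit:
        return "high: allow only under strict requirements"
    if risk > 0.1 * a.benefit:
        return "limited: transparency obligations"
    return "minimal: no specific obligations"

if __name__ == "__main__":
    demo = Activity("chatbot deployment", p_harm=0.05, severity=200.0, benefit=50.0)
    print(classify(demo))   # -> "limited: transparency obligations"
```

The sketch also makes the article's later criticism visible: every branch turns on numbers (probability, severity, benefit) that, for AI activities, are precisely what cannot be reliably estimated.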
2.1 The legitimacy issue of risk management
The question of legitimacy is an important reason the risk management approach to artificial intelligence has been criticized. The approach emphasizes embracing risk and tolerating error and is extremely friendly to technological research and application. Yet such inclusive and prudent governance rests on the premise that it does not seriously infringe individual rights or generate significant social risks. To choose a risk management approach as policy, one must first show why it is worthwhile to expose individuals and society to significant and unknown harm. In other words, since it is already known that artificial intelligence will pose risks and that those risks may cause serious harm, why choose a regulatory path that tolerates harm?
One defense holds that risk and reward coexist: although individuals suffer damage, they also enjoy the dividends of technological development. Smartphones, for example, have greatly improved people's quality of life, and most people can no longer do without them. But not everyone is a beneficiary of technology. A technological transformation involves three kinds of subjects: decision-makers, beneficiaries, and affected parties. The affected groups are those who cannot participate in policy-making and have difficulty enjoying technological benefits. Under the risk management path, individuals who deviate from the general benchmark, such as the elderly or people with cognitive impairments, are often forced to bear the adverse consequences of technological development, and their special needs are routinely overlooked. Another view holds that, to benefit the collective or the majority, it is sometimes necessary for a minority to bear losses. Collective interests are often cited as reasons for bearing risk, and groups and society, rather than individuals, are the focus of the risk management approach. But individual and collective interests are difficult to separate: when a large number of individuals become the objects of artificial intelligence, collective interests are being violated as well. The risk management approach is therefore not necessarily beneficial to the whole.
Fortunately, the drawbacks of the risk management path have gradually been recognized, and alternative strategies are being considered. In environmental protection, for example, Germany's environmental protection law abandons the earlier risk management model and adopts the precautionary principle. The point of the precautionary principle is that, in the face of scientific uncertainty, policymakers should strive to prevent harm from occurring: even where a sufficient causal link between damage and conduct is lacking, the government should take regulatory measures at once so long as the consequences of the damage could be extremely serious. The precautionary principle has by now been written into numerous environmental conventions, replacing risk management as the fundamental principle of environmental protection. In the field of AI governance, there are likewise calls from industry to strengthen supervision of AI activities and to apply licensing regimes rather than risk management to particular AI activities. American scholars have further emphasized the irreversibility of the damage artificial intelligence may cause and the need to build a resilience-based system to replace the risk management model.
2.2 Feasibility issues of risk assessment
Legitimacy aside, the feasibility of the risk management approach for AI activities is also problematic at the level of application. Risk management depends on effective risk assessment. Risk assessment is in essence a cost-benefit analysis: Article 3 of the EU Artificial Intelligence Act defines risk as the combination of the probability of damage occurring and the severity of that damage. Only through quantitative analysis can risk be compared with returns to determine acceptable limits. Yet quantitative analysis of the risks generated by AI activities is extremely difficult, and AI risk assessment may not be able to serve as a basis for decision-making.
First, there is a lack of both high-quality data and effective models. On the one hand, high-quality data as a basis for quantitative evaluation is lacking. The basic data required for risk assessment include the degree of harm, the probability of occurrence, the geographic and temporal distribution of damage (universality), its duration (persistence), and its reversibility. Before a new technology is put into application, however, no corresponding data exist for assessing its risks. Even for products already in use, the available data may be uninformative owing to technological development, geography, timing, and other factors. In particular, large language models such as ChatGPT and Gemini have strong self-learning abilities, and the risks of constantly evolving AI activities are hard to predict from existing data. Without high-quality data, the probability and degree of harm obviously cannot be calculated accurately, and the tolerable limits of AI activities cannot be determined through cost-benefit analysis. On the other hand, there is as yet no effective model for assessing AI risks. Risk assessments of emerging technologies that continue to use existing analytical frameworks fail to define and measure risk accurately. Take chemical materials as an example: the "quantitative structure-activity relationship" (QSAR) model can provide fairly reliable risk estimates for ordinary chemicals, but the toxicity of nanomaterials is influenced by factors beyond chemical structure, including size, surface area, and surface properties, so QSAR cannot effectively analyze the risks of most nanomaterials. Similarly, current testing standards and procedures for motor vehicles are not applicable to intelligent connected vehicles, and the Regulations on the Supervision and Administration of Medical Devices and the Measures for the Supervision and Administration of Medical Device Production may not adjust the production and use of AI-assisted medical devices (such as nursing robots) as expected. If old models are used to evaluate the risks of new AI activities, the results will have no reference value; and when the evaluation model fails, the risk assessment lacks persuasiveness even where valid data exist.
Second, risks arising from the superposition of technologies are difficult to assess accurately. The risks of ordinary technologies are relatively simple, and the probability of occurrence and the resulting damage can readily be determined (for example, the risk of motor vehicle accidents or power outages). The risks generated by AI large language models, however, are compounded, which makes assessment far harder. Interactions among different activities and events in a complex system increase risk exponentially and trigger synergistic effects, so that total risk is much greater than the sum of its parts. Two or more faults that are harmless in isolation may, once they combine in an unexpected way, disable safety devices and cause a major systemic accident. In the industrial ecosystem of "large model + specific application", upstream foundation models and downstream applications stand in a complex relationship of dependence: the large model trained on massive data provides the underlying logic for specific models, which are then optimized and trained for particular industries and scenarios. In this collaborative relationship, a risk assessment of the upstream large model cannot predict the specific risks generated by downstream applications, and a risk assessment of downstream applications cannot predict how feedback mechanisms will affect the self-learning of the upstream model.
The less reliable AI risk assessment technology is, the more likely external factors are to distort its results. In recent decades, scholars testing the objectivity of technology risk assessment have found that bias, ethics, policy, social culture, and other factors all affect how risks are identified and evaluated. The influence of policy on assessment results deserves particular attention. Wendy Wagner describes the influence of the external policy environment on risk assessment as a "science charade": policy decisions about the application of a technology are made under the cover of seemingly objective technical assessment. When the risk assessment of AI activities runs into technical obstacles, it is all the more susceptible to the surrounding social and political environment, and at that point the logic of wealth production always wins. If risk assessment becomes an implicit policy choice, the valve of risk control is liable to fail.
2.3 The issue of coherence in risk classification
As noted above, risk classification is the key to risk management. The risk management path strives to classify the risks of AI activities and to configure norms commensurate with the level of risk. Three main classification models currently exist internationally: the risk-attribute model, the risk-content model, and the risk-level model. Each has drawbacks, and none can classify the types and levels of risk coherently.
2.3.1 The risk-attribute classification model
The risk-attribute classification model classifies artificial intelligence activities according to the attributes of their risks. Its typical representative is the United States. The Artificial Intelligence Risk Management Framework (AI RMF 1.0) issued by the National Institute of Standards and Technology of the US Department of Commerce divides the risks of AI activities by attribute into technical risks, "socio-technical" risks, and guiding-principle risks. Technical risks affect the robustness and accuracy of AI operation; "socio-technical" risks concern the impact of AI on values such as privacy, security, freedom, and fairness; guiding-principle risks concern the possibility that AI applications may affect what is understood as "good" or "trustworthy" AI. The model's biggest problem is that the risks themselves are difficult to identify accurately. First, technology is bound to make mistakes, and how errors are allocated is itself a social value judgment, so there is no such thing as a purely technical risk. Second, "socio-technical" risks and guiding-principle risks are hard to distinguish, because whether AI is "good" or "trustworthy" can be judged only in a specific application scenario. Take deepfake technology: when it is used to synthesize fake images and videos, artificial intelligence feels untrustworthy; when it is used to recreate vanished works of art, people happily enjoy the application and regard AI as "good" and "trustworthy". A macro evaluation of AI technology cannot be separated from value judgments in specific contexts, and in this sense guiding-principle risk is in essence a "socio-technical" risk. Dividing risk types by risk attribute is not logically clean and therefore lacks feasibility.
2.3.2 The risk-content classification model
The risk-content classification model classifies risks by the consequences of their realization. The Practice Guidelines for Cybersecurity Standards - Guidelines for Preventing Ethical Security Risks in Artificial Intelligence, released by China's National Information Security Standardization Technical Committee in 2021, sort the risks of AI activities into five categories: "loss-of-control risks", "social risks", "infringement risks", "discrimination risks", and "liability risks". The report A Pro-Innovation Approach to AI Regulation, issued by the UK Department for Science, Innovation and Technology in 2023, also categorizes AI risks by the consequences of their realization, but into six types: "human rights risks", "security risks", "fairness risks", "privacy risks", "social welfare risks", and "reliability risks". Even under the same classification method, then, China and the UK classify risks differently. Because observers summarize different risk types depending on their sources of information and points of attention, it is difficult for the risk-content model to yield a persuasive classification. Moreover, any enumeration is bound to be incomplete. If legislators determine the risk types of artificial intelligence on this model, they impose limits on themselves, and some AI activities that deserve full attention will be left outside the legal norms.
2.3.3 The risk-level classification model
The risk-level classification model is currently the most widely discussed. Under this model, legislators classify risk types according to the degree of harm that AI activities may cause. The EU's Artificial Intelligence Act is its typical representative: it divides the risks of AI activities into four categories and sets corresponding rules. AI systems posing unacceptable risks are prohibited; high-risk AI systems must comply with specific requirements; limited-risk AI systems face only light obligations; and minimal-risk AI systems are left unrestricted. But this model, too, has obvious limitations. First, there are no clear criteria for assessing the degree of risk. Article 7(1) of the EU Artificial Intelligence Act, for example, requires combining "whether it seriously endangers health and safety" with "whether it has a serious adverse impact on fundamental rights, the environment, democracy, and the rule of law" to decide whether an AI activity is high-risk, but these standards are highly flexible and ambiguous. The EU tries to provide clearer guidance through the European Artificial Intelligence Board and the AI Office, but administrative agencies may not have the capacity to assess and classify risks. Generative AI exemplified by ChatGPT, for instance, may generate false information that directly or indirectly harms individual rights, yet how such activities should be classified has been highly controversial within the EU. The first state-level law regulating artificial intelligence in the United States, the Colorado Artificial Intelligence Act (SB24-205) passed in May 2024, likewise targets high-risk AI systems. Under Section 6-1-1701(9)(a), any AI system that is a "substantial factor" in making a consequential decision counts as high-risk, a standard under which virtually every AI system could be deemed high-risk. Colorado therefore had to carve out a long list of exceptions in subsection (b), which in turn risks both over- and under-inclusion and offers no clear guidance for determining whether a high risk exists. Second, there is a tension between rigid risk categories and rapidly developing AI technology. On the one hand, as the technology develops, AI applications originally regarded as limited or minimal risk may become high-risk or even unacceptable. AI-assisted teaching, for example, was once seen as an emblem of educational digitization, but with the development of facial recognition and emotion recognition, monitoring students' classroom activity now carries a serious risk of rights violations. On the other hand, the risks of AI activities originally classified as prohibited or high-risk may fall substantially as safety measures mature, yet because regulation lags behind, those activities remain improperly restricted. The risk posed by autonomous driving, for instance, keeps declining as the technology matures, and the earlier autonomous vehicles can be road-tested in real-world scenarios, the more likely they are to dominate the market. If autonomous driving is classified as a high-risk AI activity, however, the locations, times, and scope of its road testing will be strictly limited, ultimately constraining technological development and the iterative upgrading of the automotive industry.
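To illustrate the rigidity just described, here is a minimal Python sketch of a tier-based rule table in the spirit of the tiered model discussed above. The tier names follow the four categories mentioned, but the obligations listed and the example classifications are simplified assumptions for illustration only, not the Act's actual text.

```python
# Hypothetical, simplified encoding of a four-tier risk model.
OBLIGATIONS = {
    "unacceptable": ["prohibited"],
    "high":         ["conformity assessment", "risk management system", "logging"],
    "limited":      ["transparency notice to users"],
    "minimal":      [],
}

# A static lookup table: the classification is fixed at legislation time.
STATIC_CLASSIFICATION = {
    "social scoring system": "unacceptable",
    "CV screening tool":     "high",
    "customer chatbot":      "limited",
    "spam filter":           "minimal",
}

def obligations_for(system: str) -> list[str]:
    """Return the obligations attached to a system's statically assigned tier."""
    tier = STATIC_CLASSIFICATION.get(system, "high")  # unknown systems default to high here
    return OBLIGATIONS[tier]

# The rigidity criticized above is visible in the table itself: if a "limited"
# system later becomes dangerous (or a "high" one becomes safe), nothing moves
# until the legislator rewrites the table.
print(obligations_for("customer chatbot"))  # ['transparency notice to users']
```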
Risk governance, as a modern mode of social governance, has become an object of near worship, casting an almost magical halo over risk assessment and risk regulation. Examined carefully, however, a single risk management path is not a good way to adjust cutting-edge technological activities. First, the risk management path applies only to quantifiable harms and does not fit the risks generated by AI activities; ignoring unquantifiable harms is the "root of disaster". Second, the complexity of AI activities makes comprehensive and appropriate risk classification and grading extremely difficult. Finally, the risk management path carries the danger of harming both individual and collective interests. The EU's adoption of a risk management approach to regulate AI activities is a worthwhile experiment, but a single theory cannot regulate complex, multi-scenario AI applications. China's future AI legislation should not, under the sway of the "Brussels effect", simply follow the risk management path step by step; rather, on the basis of a full understanding of the characteristics of AI activities, it should seek a comprehensive governance paradigm that balances safety and development.
3. A dual legislative path tailored to the characteristics of artificial intelligence activities
Artificial intelligence activities are highly complex, and the laws that regulate them are hard to design along the single path of risk management. To make law that regulates conduct, one must first understand the object being regulated. Legislation built solely on a subjective judgment may not correspond to the real problem, and it is impossible to know whether that judgment has sufficient explanatory reach or how much deviation or omission it contains. Artificial intelligence activities have dual attributes, and AI legislation should likewise have a dual legislative positioning.
Artificial intelligence activities consist of research and development activities and application activities. The former reflects a distinct scientificity, while the latter centers on empowering applications and is therefore instrumental in character. The two of course overlap: some research and development aims precisely at empowering applications, and in the course of application, AI systems are further trained on data, enabling qualitative breakthroughs in AI research and development. For clarity, the following discussion treats the scientificity and instrumentality of AI activities within the frameworks of research activities and application activities respectively.
3.1 The dual nature of artificial intelligence activities
Technology can be roughly divided into two categories: specialized technology and empowering technology. Specialized technology emphasizes depth and professionalism, while empowering technology emphasizes application and the enhancement of capabilities; the two differ fundamentally in nature, effect, and mode of realization. Earlier technologies were either specialized or empowering. Artificial intelligence activities are more complex than earlier technological activities: they possess both high professionalism (scientificity) and broad empowerment (instrumentality), and thus belong to specialized and empowering technological activities at the same time.
3.1.1 The scientificity of specialized technological activities
Artificial intelligence is a representative technology of the new round of technological revolution. AI research is a specialized technological activity aimed at developing AI systems capable of self-analysis, self-summary, and self-correction. Toward this goal, various schools of thought have emerged in computer science, such as symbolism, connectionism, and behaviorism. Neural network technology has produced major breakthroughs in AI research, such as deep learning algorithms, and the large language models represented by ChatGPT have taken AI development a big step forward. Artificial intelligence now has a remarkably strong capacity for self-learning. On this basis, some research activities aim directly at superintelligence, striving to present logical forms of intelligence that humans have not yet discovered or cannot achieve, thereby surpassing human intelligence and exploring heights humans cannot reach. Plainly, the development of artificial intelligence is in essence scientific research and belongs to specialized technological activities.
The scientificity of AI research and development makes AI activities harder to adjust. Non-professionals often cannot truly understand the implications of artificial intelligence as scientific research. Take algorithms, one of the three elements of artificial intelligence: writing, and even reading, AI algorithms is a professional skill, and for non-professionals an algorithm is another language. Only with the help of computer scientists' self-criticism of their own research can we understand and regulate AI activities. There are already research projects aimed at controlling artificial intelligence; Google and Stanford University, for example, are conducting research on algorithm visualization to enhance interpretability. Such work has made image recognition algorithms more interpretable, but much remains to be done. Only by relying on the scientific system itself can problems be better identified and solved.
3.1.2 The instrumentality of empowering technological activities
When considering AI activities, people pay more attention to its empowering applications. In the past, technology was generally regarded as directional, its impact confined mainly to a specific domain (fossil fuels, communication technology, and the like); even where interdisciplinary cooperation produced a degree of openness, it lacked universality. On this understanding, technology was treated chiefly as a tool for promoting economic development, and technology law focused on its economic significance, adhering to a "technology-economy" legislative paradigm. The emergence of artificial intelligence has changed the directionality of technology. AI is a general-purpose technology that empowers virtually every field and facet of human life, including scientific research, education, manufacturing, logistics, transportation, justice, administration, advertising, and art. By adapting to specific application scenarios, AI can meet diverse needs. Humanoid robots are a typical example of the superposition of AI technology and scenarios: they take different forms according to different scenario requirements, such as companion robots, caregiver robots, and intimate robots. The Guiding Opinions on the Innovation and Development of Humanoid Robots issued by China's Ministry of Industry and Information Technology point out that the first step is to develop a basic humanoid robot platform, a general-purpose "public version", and then to support structural modification, algorithm optimization, and the enhancement of specific capabilities under different scenario requirements.
As an empowering tool, AI activities are not necessarily rational in value terms; they may create new social risks and lead to the alienation of social relationships. Take humanoid robots. Supported by large language models, humanoid robots will acquire powerful capacities for manipulation, which undermines human subjectivity. First, humanoid robots present the appearance of intelligence. If people find that humanoid robots are more capable than ordinary persons, the robots will eventually gain greater influence than ordinary persons; when we habitually ask the robot's opinion first, we are losing our autonomy. An obvious example is that navigation apps such as Amap and Baidu Maps greatly weaken an individual's ability to navigate independently. Second, a highly anthropomorphic appearance makes humanoid robots easier to regard as "one of us". Empathy toward humanoid objects is a natural psychological tendency, and the appearance of humanoid robots triggers psychological projection and readily evokes empathy. If companion robots or intimate robots (sex robots) become almost indistinguishable from real persons, humans may, intentionally or not, treat them as their own kind and as their best companions, giving embodied intelligence still more extraordinary powers of manipulation. Humanoid robots with such manipulative capacity may limit individual self-determination, interfere with socialization, and even distort family values, creating new social problems that the legal order must actively address and regulate.
3.2 The dual legislative positioning of artificial intelligence law under its dual attributes
Given the dual attributes of scientificity and instrumentality in AI activities, the artificial intelligence law that regulates them should likewise have a dual positioning. AI research and development is scientific research, so the Artificial Intelligence Law regulating such activities naturally has the attributes of technology law; AI application is the combination of technology and scenario, so the Artificial Intelligence Law regulating scenario-based applications should also have the attributes of application law. China's future AI legislation should therefore carry the dual positioning of technology law and application law.
3.2.1 Artificial Intelligence Law as a Technology Law
AI legislation guided by technology law should fully respect technological autonomy and attend to technological ethics. The hallmark of modern society is functional differentiation, which produces many subsystems for solving social problems. Social subsystems cannot be integrated into one another in a single way; each operates according to its own medium. The economic subsystem cannot perform the functions of the education subsystem, for example, and scientific research can be advanced only through the technology system. Scientific research generally resists excessive external restriction, and its normalization should rely first on the technology system itself: using technological ethics to regulate technology is what is called the self-reflexivity of the technological system. It was long believed that scientific research should have no forbidden zones, because research is a pursuit of truth that depends on a free, exploratory environment, and forbidden zones set in advance would undermine its truth-seeking character. But with the rise of modern science, the discovery of knowledge is no longer taken as the ultimate end, and the success of science may be the beginning of harm. The American journal Science has observed that the questions modern science raises about science and social responsibility, science and ethics, and science and modernity are drawing attention at unprecedented speed and scale. Science is not wholly beneficial, and the harm of some research even outweighs its academic value. People have gradually realized that technology is not the same as civilization and that scientific research, too, should be regulated, which has given rise to technological ethics. As a type of technological activity, AI activities should naturally comply with the ethics of AI technology.
The technological character of an artificial intelligence law also means that, while regulating technological activities, it should establish appropriate systems for promoting technology. The core elements of artificial intelligence are algorithms, computing power, and data, but an artificial intelligence law is not the same as a law of AI elements. If AI legislation is simply decomposed into legislation on each element, with general rules designed around algorithms, computing power, and data respectively, it will not only dissolve the legislative purpose of the Artificial Intelligence Law but also blur its relationship with the Personal Information Protection Law, the Data Security Law, and similar statutes. Rules should not be drafted around every element merely because those elements are core; the real questions are what obstacles hinder the development of artificial intelligence and how China's future Artificial Intelligence Law can design rules that give full play to the barrier-breaking effect of institutions. It is already clear that the collection and use of data resources is a key issue affecting AI development. Article 18 of the Artificial Intelligence Model Law (Expert Proposal Draft) issued by the Chinese Academy of Social Sciences and Article 20 of the Artificial Intelligence Law of the People's Republic of China (Scholars' Proposal Draft) jointly drafted by China University of Political Science and Law and other universities both provide for a data element supply system, emphasizing that the state should support the construction of basic and thematic databases in the AI field, promote the efficient pooling, sharing, and utilization of data resources, and encourage and guide relevant actors to carry out collaborative research and development of big data and AI technology. China's future AI legislation should establish data use rules and moderately relax existing personal information protection and copyright rules to meet the data needs of AI training.
3.2.2 Artificial Intelligence Law as an Application Law
The complex and diverse application scenarios of artificial intelligence increase the difficulty of unified legislation. The legal relationships triggered by AI's empowerment of different scenarios are not the same. Take facial recognition: used for access control, electronic passports, or automatic payment, it implicates freedom of movement and the security of property; used for general surveillance, it chiefly implicates the tension between liberty and security. Automated decision-making algorithms likewise raise different legal issues in public and private domains. In the commercial sphere they readily produce algorithmic black boxes and discrimination; a hiring AI, for instance, may systematically exclude applicants from particular regions and institutionalize regional discrimination. The use of decision-making algorithms by public authorities directly affects individuals' fundamental rights; automated administrative algorithms exemplified by "instant approval" greatly improve administrative efficiency but also, to a degree, encroach on citizens' personal liberty and right to know. The values at stake in AI applications thus vary across the public and private spheres, and a unified normative design is not easy.
For a long time, thinkers preferred an essentialist mode of thought, which seeks the laws of development by defining the "essence of things" and uses those laws to regulate the things themselves. Unfortunately, efforts to grasp the essence and simplify complexity rarely succeed: what seems transparent on the surface often fails to reflect the actual condition of a complex society and instead oversimplifies the object of understanding. Modern cognitive paradigms therefore increasingly favor non-essentialism, which explains and adjusts things by revealing their complexity and diversity on the basis of a comprehensive grasp of their different aspects. In economics, for example, complexity economics points out that classical equilibrium theory is too idealized and too rationalistic, distorting the real world; it criticizes the oversimplification of earlier economic models and holds that the economy is not deterministic and statically balanced but path-dependent, organic, and constantly evolving. In the author's view, the EU's adoption of a single risk management approach to regulate AI activities assumes that risks are deterministic and predictable, thereby simplifying and freezing a complex world. A single theory cannot regulate complex, multi-scenario AI applications; rules designed from a single theory will inevitably prove either too strict or too lax. Given the diversity of AI application scenarios, the norms for AI-enabled applications should not chase a simple, single regulatory framework but should take the complexity of things seriously.
There are currently two draft proposals for an artificial intelligence law in China. The Artificial Intelligence Model Law 2.0 (Expert Proposal Draft) released by the Chinese Academy of Social Sciences provides for systems supporting and promoting artificial intelligence, an AI management system, the obligations of AI developers and providers, and a comprehensive AI governance mechanism; the Artificial Intelligence Law of the People's Republic of China (Scholars' Proposal Draft) jointly drafted by China University of Political Science and Law and other universities provides for development and promotion, the protection of users' rights, the obligations of developers and providers, supervision and administration, special application scenarios, international cooperation, and so on. Neither version adopts a single risk management path or theoretical model to regulate AI-empowered activities. In the author's view, the issues involved in different AI activities belong to different fields, with different subject structures and interest relationships. AI-empowered applications are highly contextual, and no single "correct" model can be presupposed as common knowledge. The normalization of AI-enabled applications should therefore not overly pursue theoretical elegance and systematicity, but should, on the basis of a full understanding of real-world problems, design governance structures that accommodate scenario-based application and dynamic adjustment.
4. The development of artificial intelligence regulatory framework under the dual legislative approach
Artificial intelligence activities are both scientific and empowering, and China's future AI legislation should proceed under the dual positioning of technology law and application law. Legislation with a dual positioning undoubtedly tests the wisdom of legislators: a simple, logically tidy governance framework may satisfy formal aesthetics yet prove inadequate to its regulatory targets. As technology law, the Artificial Intelligence Law should focus on internalizing technological ethics into AI research and development while breaking down institutional barriers and promoting the development of AI technology; as application law, it should attend to the functional alienation produced by technological empowerment and meet diverse regulatory needs through flexible normative configuration.
4.1 Accompanying regulation of artificial intelligence research and development under the technology law positioning
Artificial intelligence research and development activities are specialized technological activities with a scientific character, so their regulation cannot ignore the technology system's own distinctive modes of adjustment. In addition, an Artificial Intelligence Law that regulates technological activities should give full play to the function of technology promotion law, removing institutional obstacles to AI development and promoting the transformation of AI research achievements.
4.1.1 Technological ethics obligations
Technology differs from nature: in nature everything is closely interconnected, whereas a technological system is a causally closed operating system. This inherent closure means that only members of the system can fully understand the meaningful communication that takes place within it. Computer scientists understand the dangers in AI development best and are best placed to control them. An Artificial Intelligence Law positioned as technology law should therefore attend to the research and development process and require scientists to abide by technological ethics.
Technological ethics is the technology system's self-adjustment of scientific research activities. It can supply behavioral norms that track technological development and business models more promptly and closely, respect the laws of AI development, promote innovation in AI scenarios, and complement legislative supervision, enhancing the social adaptability of legislation. The National New Generation Artificial Intelligence Governance Specialist Committee issued the Ethical Norms for New Generation Artificial Intelligence in 2021, establishing four research and development norms: strengthening self-discipline, improving data quality, enhancing security and transparency, and avoiding bias and discrimination. It should be noted that ethical principles can truly take effect only when they are concretized into obligations; without specific statutory rules for AI activities, the ethical principles of artificial intelligence are difficult to implement. The legal order should internalize research ethics into the research process through the design of accompanying norms. China's future Artificial Intelligence Law should establish systems such as recording obligations, reporting obligations, safety management, and ethics committees. For reasons of space, the following discussion focuses on the recording and reporting obligations that R&D personnel should bear.
Recording is the foundation of the technology system's self-observation and self-debugging. The risks of AI activities are unpredictable, and only with the help of scientific records can ongoing research activities be understood to some degree. The European Commission's 2020 White Paper on Artificial Intelligence - A European Approach to Excellence and Trust states clearly that AI designers and operators should record and preserve: the training and testing datasets, including a description of their key features and of how they were selected; where reasonable, the datasets themselves; and, as regards programming and training methods, the processes and techniques used to build, test, and validate the AI system, including design choices made to avoid discrimination. In the era of large language models, AI researchers should bear an even heavier recording responsibility: only by documenting the research process in an understandable way can the uncertainty created by large models be "controlled".
The recording obligation should not exist only in the laboratory but should run through the entire lifecycle of the AI system. Given the self-learning and self-growth of artificial intelligence, the obligation to record AI activities should have a certain specificity. Compared with products of the industrial era, AI technology is more likely to evolve after deployment, because data from real-world scenarios continually stimulate its self-growth. Seed AI, based on biomimetic design, can improve itself continuously: it first understands its own operating logic through experimentation and trial and error, information gathering, and programmer assistance, and then achieves "recursive self-improvement" by building new algorithms and structures, thereby acquiring the capacity for self-evolution. In a sense, the whole of society is the evolutionary ground of AI systems. Article 12(1) of the EU Artificial Intelligence Act explicitly requires high-risk AI systems to be capable of automatically recording events during operation, and paragraph 2 emphasizes that developers must assume tracking and recording obligations after AI products are placed on the market. So long as an AI system has self-learning capacity, its developers should bear a continuous recording obligation so that the system is monitored across its entire lifecycle.
The reporting obligation and the recording obligation complement each other as two sides of one coin. If records reveal errors or defects in an AI model, R&D personnel should report promptly to the regulatory authorities. Article 62(1) of the EU Artificial Intelligence Act requires providers of high-risk AI systems to report serious incidents or malfunctions to the regulator immediately, and in any event within 15 days of becoming aware of them. When strong artificial intelligence appears likely to be realized, researchers should report it in a timely manner, because strong AI may be humanity's last invention and could pose great danger to its inventors. Some, of course, argue that strong AI remains far off and that doomsday narratives exaggerate the danger; perhaps only time will tell. The real question is whether we are willing to accept the possibility of self-destruction through technological change. When the singularity arrives, the fate of humanity should not be decided by scientists alone but chosen jointly by all of humanity, scientists included. Yet to give decision-making power to anyone beyond the developers, those others must be kept abreast of development progress in real time. AI developers should therefore bear an obligation to report major developments, so that regulators can intervene in AI research activities in time and prevent systemic collapse.
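As a purely illustrative sketch of how the continuous recording and reporting obligations described above might be operationalized in an engineering pipeline, the following Python fragment logs lifecycle events and flags those that would trigger a report to a regulator. The event categories, the 15-day window, and the notify_regulator hook are simplified assumptions for illustration, not requirements drawn verbatim from any statute.

```python
import json
from datetime import datetime, timedelta, timezone

REPORT_WINDOW = timedelta(days=15)  # assumed window, mirroring the 15-day rule discussed above

class LifecycleLogger:
    """Append-only event log spanning training, deployment, and operation."""

    def __init__(self, path: str = "ai_lifecycle_log.jsonl"):
        self.path = path

    def record(self, phase: str, event: str, serious: bool = False) -> None:
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "phase": phase,          # e.g. "training", "deployment", "operation"
            "event": event,
            "serious": serious,
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry, ensure_ascii=False) + "\n")
        if serious:
            deadline = datetime.now(timezone.utc) + REPORT_WINDOW
            self.notify_regulator(entry, deadline)

    def notify_regulator(self, entry: dict, deadline: datetime) -> None:
        # Placeholder hook: a real system would file the report through whatever
        # channel the regulator prescribes, before the deadline.
        print(f"REPORT DUE by {deadline.date()}: {entry['event']}")

if __name__ == "__main__":
    log = LifecycleLogger()
    log.record("training", "dataset v3 selected; key features documented")
    log.record("operation", "model produced systematically biased outputs", serious=True)
```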
At present, internet regulators are trying to strengthen supervision of AI activities through algorithm filing. Compared with recording and reporting obligations, however, algorithm filing fits less well with the scientific character of specialized technological activities, and normalizing AI activities through filing does not conform to the regulatory model of technology law. Algorithm filing is in essence an attempt by the authorities to reach inside the technology system and directly supervise the design and operation of artificial intelligence: its aim is to obtain information on potentially hazardous algorithmic systems designed and deployed on platforms, to fix points of accountability, and to lay an informational foundation for later administrative supervision. But filing can become approval in disguise and improperly interfere with research and development; in practice, some regulators have eroded the autonomy of the technology system by approving under the guise of filing. Articles 18 and 19 of the EU's 1995 Data Protection Directive imposed an obligation on controllers and processors to notify the use of automated processing. In drafting the General Data Protection Regulation (GDPR), however, the European Commission observed that the notification obligation imposed a bureaucratic burden on businesses and advocated abolishing general notification in favor of comprehensive recording of personal data processing activities, with the records to be provided to the regulator promptly and completely upon request. The GDPR, adopted in 2016, took up this recommendation and replaced the Directive's notification obligation with the recording obligation of Article 30. Moreover, AI large models evolve continuously, and a filing that captures a static, momentary algorithm cannot help regulators understand its later evolution. The better path is to strengthen the continuous recording and reporting obligations of AI developers and to achieve governance goals through the control valves of the scientific system itself.
4.1.2 Establish a data system to promote the development of artificial intelligence technology
The Artificial Intelligence Law, insofar as it regulates technological activities, should also design promotion-oriented institutions that support the development of artificial intelligence technology while regulating it. Data utilization is a bottleneck constraining the development of artificial intelligence: however sophisticated the design of a large model, the quantity and quality of training data have a decisive impact on its performance. By source, data can be divided into personal data and non-personal data. At present, the use of personal data for artificial intelligence training is permissible only on a legal basis prescribed by the Personal Information Protection Law, which severely limits AI data training; the use of non-personal data, though not subject to the Personal Information Protection Law, faces problems such as "data silos" and "data monopolies". In addition, the intellectual property system makes it difficult for artificial intelligence training to aggregate and utilize high-quality data from commercial databases and knowledge platforms such as China National Knowledge Infrastructure.
Firstly, artificial intelligence legislation can use an "anticipatory consent" rule to allow the reasonable use of personal data for AI training. Article 13, Paragraph 1 of the Personal Information Protection Law stipulates seven legal bases for processing personal information, among which "individual consent" is the core rule. According to Article 14, consent must be given voluntarily and explicitly by the individual with full knowledge. On the one hand, however, artificial intelligence training requires massive amounts of personal data, and obtaining each individual's consent in advance is impracticable; on the other hand, the consent mechanism may itself become a burden on individuals in the era of artificial intelligence: personal information subjects, without fully understanding AI activities, face information overload and frequent decision-making, which makes truly meaningful decisions difficult. China's future artificial intelligence legislation could therefore introduce an "anticipatory consent" provision that reduces the burden on individuals while allowing AI systems to collect and process personal information in circumstances that conform to reasonable expectations. As early as 2004, American scholars proposed the theory of "contextual integrity", which holds that expectations of personal information protection differ across contexts. If the processing does not conform to expectations, it infringes personal information rights even with the individual's explicit consent; conversely, processing that conforms to expectations remains lawful even without prior explicit consent. For example, when a smart home robot obtains the buyer's consent to process the buyer's personal information, it will in practice also process the personal information of other family members (such as minors) and incidental users (such as visitors); such processing should be regarded as conforming to expectations. With the help of the "anticipatory consent" rule, the rigidity of the consent rule in the Personal Information Protection Law can be softened to a certain extent, and the collection and processing of personal data can promote the advance of artificial intelligence technology.
Secondly, artificial intelligence legislation should use data access rights to break down data silos and appropriately relax intellectual property restrictions on the use of non-personal data by artificial intelligence. Data is not exhausted by use and can be repeatedly utilized and shared for different purposes, which makes data sharing as the norm possible. Under the influence of technological barriers and business models, however, data concentrates on large platforms, ultimately producing data silos and data monopolies. For personal data, transfer can be achieved through the right to data portability stipulated in Article 45, Paragraph 3 of the Personal Information Protection Law; for non-personal data, there are as yet no clear circulation rules. Requiring "interconnectivity" between large platforms risks excessive interference with freedom of business, while encouraging alliances among the strong may further entrench data monopolies. The EU Data Act, proposed in 2022, specifically provides for a right of access to data, allowing users to port non-personal data related to products or services. China's artificial intelligence legislation can draw on these data access provisions, extend the object of user-initiated data transfer from personal data to non-personal data, break down data silos with the help of individual initiative, and promote data aggregation. In addition, copyright poses challenges to the aggregation and fusion of artificial intelligence training data. Open websites are an important source of training data, but the text, images, sounds, and other content on them may already be protected by the Copyright Law. For example, posts and Q&A content published on Weibo and Zhihu, and the music, pictures, and videos uploaded by users to WeChat, Xiaohongshu, and TikTok, may be protected by copyright once they reach the threshold of originality; an artificial intelligence company that uses such content may commit copyright infringement. To encourage the development of artificial intelligence technology, Japanese government officials have stated that Japanese copyright law does not restrict the use of works as raw material for artificial intelligence training. In the author's view, however, allowing artificial intelligence to use others' works for training without limits goes too far. A more appropriate approach is to draw on Article 4 of the EU's 2019 Directive on Copyright in the Digital Single Market: on the one hand, the use of works for artificial intelligence training constitutes an exception to copyright protection; on the other hand, copyright owners may expressly reserve their rights against such use in a machine-readable manner. Considering that artificial intelligence training will draw on a massive number of works, China's future Artificial Intelligence Law can also draw on Article 53(1)(c) of the EU Artificial Intelligence Act and require relevant entities to formulate dedicated policies to comply with copyright law and to assist copyright owners in exercising their right to object.
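For illustration only, the following is a minimal sketch of how a developer's copyright compliance policy might check a machine-readable rights reservation before adding a web page to a training corpus. It assumes, purely for the example, that the reservation is expressed through robots.txt rules addressed to a training crawler token such as "GPTBot"; actual policies under Article 4(3) of the Directive or Article 53(1)(c) of the EU Artificial Intelligence Act may rely on other machine-readable signals.

```python
# Minimal sketch: honoring a machine-readable copyright reservation before
# using web content for AI training. Assumption: the reservation is declared
# in robots.txt for a hypothetical training crawler token ("GPTBot" here);
# real compliance policies may use other machine-readable mechanisms.
from urllib import robotparser


def may_use_for_training(page_url: str, robots_url: str,
                         crawler_token: str = "GPTBot") -> bool:
    """Return False when the site has reserved its rights against this crawler."""
    parser = robotparser.RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(crawler_token, page_url)


if __name__ == "__main__":
    allowed = may_use_for_training(
        "https://example.com/articles/123",
        "https://example.com/robots.txt",
    )
    print("May be added to the training corpus:", allowed)
```

In a real compliance policy, such a check would be only one element, operating alongside record-keeping about training data sources and channels through which copyright owners can lodge objections.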
4.2 The regulatory system for artificial intelligence-empowered applications under the positioning of application law
The main purpose of scientific research is to discover knowledge; controlling danger is only a secondary consideration, and the main purpose naturally crowds out secondary ones. By analogy, the main purpose of a company is profit, and the cultivation of corporate culture is only a secondary pursuit; when the two conflict, the company will sacrifice building or changing its corporate culture for the sake of profit. Similarly, the technology system may neglect values such as rights protection and social justice in order to advance technological progress. Law, as a powerful tool by which society responds to new challenges, therefore needs to intervene more actively, and artificial intelligence legislation is accordingly urgent. If the legal system is seen as society's way of responding to crises, it functions as the immune system of society as a whole; but an immune response is not always beneficial and may sometimes create new problems of its own. Legal norms, by virtue of their rigidity, may hinder the development of the artificial intelligence industry. The regulation of artificial intelligence applications therefore requires both the support of the legal system and the interfaces necessary for experimental governance, so as to construct a flexible regulatory framework for artificial intelligence-empowered applications.
4.2.1 Standardize the rights and obligations mechanism for multi-scenario applications of artificial intelligence
Lawful and unlawful are the basic code of the legal system, but artificial intelligence applications present markedly different features from scenario to scenario, which makes the design of unified rules more difficult. Given that the legal system relies on abstract rights and obligations to order complex social life, artificial intelligence legislation should focus on identifying and distilling the types of rights and obligations of the relevant entities.
4.2.1.1 The rights that individuals should have vis-à-vis artificial intelligence systems
To defend the central position of human beings, individuals should enjoy a series of protective rights vis-à-vis artificial intelligence systems. When drafting the General Data Protection Regulation, the European Union made clear that a risk-based regulatory approach alone could not sustain the EU's legal framework for data protection: regardless of the degree of risk generated by the processing, the law should grant rights to data subjects. Compared with personal information protection, the issues raised by artificial intelligence are more complex and more scenario-dependent, so the role of private rights deserves even greater attention.
Firstly, the right to know vis-à-vis artificial intelligence systems should be protected by law. The right to know is the premise and foundation of human-centered good governance of artificial intelligence, and it should be respected regardless of the scenario. If individuals cannot tell whether they have entered an artificial intelligence ecosystem, they can hardly develop the awareness needed to defend their rights. The counterpart of an artificial intelligence system should have the right to know whether their information is being processed by the system, as well as the system's preset functions, limitations, adverse effects, and other relevant information. Where an artificial intelligence system has a certain capacity to manipulate users, the provider should inform the counterpart of this so that they are aware that their free will may be distorted. Even therapeutic medical artificial intelligence should inform patients and respect their autonomy before treating psychological trauma (Article 1219 of the Civil Code).
Secondly, whether an individual, after obtaining the relevant information, enjoys the right to request an explanation, the right to refuse an application, and other rights should depend on the scenario. First, different types of artificial intelligence systems correspond to different individual rights. Artificial intelligence activities in application scenarios can be roughly divided into two categories: decision-making and assistive. Decision-making artificial intelligence replaces humans in making rapid judgments and decisions, as in credit scoring (loan or credit card issuance), epidemic-prevention health codes, "instant approval", and tax deduction systems. Assistive artificial intelligence only provides supporting functions, such as personalized recommendation, medical image processing, and intelligent customer service robots. Decision-making and assistive systems affect stakeholders differently in different application scenarios, and the allocation of rights should differ accordingly. A decision-making artificial intelligence system directly affects the legal interests of the counterpart, who should therefore have the right to request an explanation of how the decision was made; an assistive artificial intelligence system does not directly make decisions affecting stakeholders' interests but may subtly shape their patterns of thinking, so stakeholders should have the right to refuse its application in advance. Accordingly, the individual right to request an explanation should mainly target decision-making systems, while the right to refuse application should mainly target assistive systems. Second, for both the right to request an explanation and the right to refuse application, the scope of the right should be assessed separately in public and private application scenarios. Taking the right to request an explanation as an example: if public authorities use artificial intelligence systems in administrative or judicial action, the counterpart should have the right to request an explanation of the specific decision made by the system; if commercial institutions use artificial intelligence systems in commercial activities, whether the counterpart enjoys such a right should be judged in light of factors such as the feasibility of explanation and the protection of trade secrets. When public authorities use artificial intelligence systems to make decisions, decisions that cannot be explained call the legitimacy and legality of public action into question; compared with improving administrative and judicial efficiency, the protection of citizens' basic rights ranks higher in the order of values. By contrast, the logic, parameters, feature weights, and classifications of a commercial artificial intelligence system are the enterprise's trade secrets and are not necessarily subject to individual requests for algorithmic explanation. In a 2014 judgment, the German Federal Court of Justice expressly held that data subjects may not demand disclosure of information such as weights, scoring formulas, statistical values, and reference groups, since this would violate the company's trade secrets, and disclosing trade secrets plainly does not serve the legislative purpose.
When an individual requests an explanation of a decision made by a commercial automated decision-making algorithm, how to reconcile the request for algorithmic explanation with the legal protection of trade secrets will be a difficult issue in judicial practice.
Finally, China's future Artificial Intelligence Law should also specifically provide for a right to request human communication. Only if individuals are assured of the right to express their views and to obtain human intervention can they avoid being reduced to objects of machines. Article 22(3) of the EU General Data Protection Regulation specifically grants the counterpart of an algorithm the right to obtain human intervention. By comparison, Article 24, Paragraph 3 of China's Personal Information Protection Law provides only the rights to request an explanation of an algorithmic decision and to refuse such a decision, and does not provide a right to human communication. In the era of artificial intelligence, the right of affected individuals to engage in meaningful communication with the people behind the machines is a basic requirement for defending human subjectivity. China's future artificial intelligence legislation should therefore specifically stipulate that the counterparts of artificial intelligence systems have the right to communicate with a human.
4.2.1.2 Different obligations of providers and users of artificial intelligence systems
The obligation-bearing subjects related to artificial intelligence systems can be roughly divided into two categories: providers and users. Because they affect the AI system, as an empowering tool, in different ways, the obligations they should bear also differ. Providers of artificial intelligence systems design and develop the products and place them on the market, and should assume obligations such as information disclosure and human oversight; users of artificial intelligence systems, who can influence specific output results, should bear obligations of cautious use and log keeping.
Firstly, whether the system is decision-making or assistive, and regardless of the application scenario, AI system providers should generally assume certain legal obligations, including but not limited to information disclosure, human oversight, and ensuring system stability. First, providers should assume an obligation of information disclosure so that downstream partners and users can learn the relevant information about the system in a timely manner. This information mainly includes the provider's identity and contact details, the functions and limitations of the artificial intelligence system, test results, possible adverse effects on the target audience, and how the system is to be maintained; providers should convey it to the relevant parties through explanatory documents (technical files). Second, artificial intelligence systems should be designed with appropriate human-machine interfaces so that they can be effectively supervised. The human oversight mechanism aims to detect functional anomalies and abrupt performance changes in a timely manner, to correct automation bias, and to allow output results to be manually controlled or reversed. When significant risks arise, AI system providers should be obliged to "shut down" the system with one click to avoid irreversible consequences. Third, providers should maintain the stability and security of AI systems, and in particular ensure that the systems have self-recovery capabilities. Malicious third parties may attempt to alter a system's mode of operation, performance, or output through "dirty data", system vulnerabilities, and other means; providers can deploy targeted technical measures such as backups, patches, and security solutions to protect network security. Finally, as a party to the United Nations Convention on the Rights of Persons with Disabilities (UNCRPD), China is obliged to ensure that persons with disabilities have equal access to artificial intelligence systems. In the era of artificial intelligence, vulnerable groups also have the right to use artificial intelligence technology without barriers; AI service providers should take their needs into account during system design and provide the necessary measures so that they are not excluded by technology.
Secondly, users of artificial intelligence systems should assume obligations of cautious use and log keeping. The provider and the user of an artificial intelligence system are not the same entity; the user operates the system directly and is in the best position to influence its results and monitor its operation, and should therefore bear corresponding legal obligations. First, users should employ artificial intelligence systems with caution. Where the data a user inputs may affect system functionality, the user should ensure the legality of that input. Generative artificial intelligence, for example, provides services in a "user input plus machine output" mode, and the nature and value orientation of the generated content depend largely on the user's instructions; users should respect the preset functions of the system and must not pursue illegal purposes through evasive techniques such as deliberately ambiguous wording. Second, where logs are indispensable for diagnosing the system's operating status and faults, users are obliged to preserve the relevant logs; if only because of storage costs, it is advisable to set a time limit on log retention (six months, for example). Third, if users discover in a specific scenario that the artificial intelligence may cause discrimination, violate human dignity, or raise similar problems, they should suspend use of the system and report the issue to the AI system provider; where there are major hidden dangers, they should at the same time report to the national regulatory authorities to prevent irreversible consequences.
Beyond the general obligations above, China's future artificial intelligence legislation can also select important AI scenarios in which regulatory experience is mature and specifically stipulate the special obligations of AI system providers and users, for example by making specific provision for the use of systems such as facial recognition by public authorities. Conversely, where the obligations of the relevant entities in a given scenario remain controversial, the law may appropriately leave the matter open and authorize regulatory agencies to issue specialized legal documents for its regulation.
4.2.2 Experimental governance for adjusting complex artificial intelligence application activities
The challenges that emerging technologies pose to governance manifest mainly as unknowns and uncertainties. "Unknown" refers to problems that decision-makers may not even be aware of, yet which are real and constantly changing; "uncertainty" refers to problems that decision-makers cannot yet resolve, and for which governance solutions must be continuously studied and improved. Even a rights-and-obligations regime configured on the basis of thorough research may, in the face of complex and multi-scenario artificial intelligence applications, see temporarily effective rules turn in an instant into obstacles to technological development; the law therefore constantly faces the problem of being out of step with emerging technologies. To address this, China's future Artificial Intelligence Law can consider adopting an experimental governance model and constructing a dynamic mechanism of trial, error, and correction.
Experimental governance originated in the summary and distillation of EU governance practice. It generally comprises four elements: framework goals, participant discretion, dynamic evaluation based on peer review, and feedback adjustment based on evaluation results. Specifically, the decision-making body establishes open framework goals after fully hearing the views of stakeholders; relevant agencies are authorized to exercise broad discretion and to adjust strategies to specific circumstances; the responsible institutions regularly report on governance performance, whose suitability is assessed through peer review; and where a responsible institution has not made satisfactory progress, it must propose a reasonable improvement plan based on the evaluation results. In this model, goals are variable and there are no fixed, settled rules: both are revised in light of evaluation results, and through repeated cycles of provisional goal-setting and revision, appropriate solutions to challenges are ultimately found through "learning together". Experimental governance emphasizes a pluralistic, open, and interactive governance system; it can compensate for the shortcomings of the traditional hierarchical model, such as a single governing subject, a closed system, and one-way processes, and is better suited to the challenges posed by constantly developing and changing artificial intelligence applications.
A prominent manifestation of experimental governance is a degree of vertical decentralization, which allows regulatory agencies to conduct regulatory experiments and accumulate regulatory experience. Under this model, legislators design the general objectives of artificial intelligence regulation, grant regulatory agencies discretionary power, regularly evaluate and report on governance performance, and adjust regulatory strategies in light of the recommendations of the relevant parties. The regulatory sandbox is a typical design for promoting experimental governance: a controlled testing environment established by regulatory agencies in accordance with law, in which artificial intelligence systems may be developed and tested for a limited period. Even if the artificial intelligence activities in question violate current laws and regulations, the persons involved will not be held liable so long as the requirements of the sandbox are met. The UK Department for Science, Innovation and Technology has pointed out that regulatory sandboxes can help bring new products and services to market quickly, generating economic and social benefits; verify how regulatory frameworks operate in practice and identify the innovation barriers that need to be removed; and indicate the directions of technological and market change to which the regulatory framework must adapt. Public authorities may, for particular policy reasons, wish to accelerate the development of the artificial intelligence industry in specific regions or fields. Japan, for example, has designated specific regions and scenarios for the layout of the artificial intelligence industry, such as Fukuoka (road transportation), Kansai (wireless), Kyoto (data protection), and Tsukuba (security governance and tax regulation). China's future artificial intelligence legislation should authorize regulatory authorities to set up regulatory sandboxes in specific regions and fields, so as to overcome the twin drawbacks of over-tightening and letting go entirely; regulatory authorities can then adjust sandbox policies in light of evaluation results and seek appropriate governance measures through exploration. In addition, legislation can promote experimental governance by authorizing regulatory agencies to issue specialized normative documents. In high-technology fields, small and medium-sized enterprises are often the main force of innovation; as Kevin Kelly has observed, the companies that will shine in artificial intelligence in the future are unlikely to be today's large enterprises, but rather some inconspicuous small company. Enabling small and medium-sized enterprises to make this transformation requires additional institutional support. Just as Article 62, Paragraph 2 of the Personal Information Protection Law authorizes the national cyberspace administration to issue rules and standards specifically for small personal information processors, China's future artificial intelligence law can likewise empower regulatory agencies to issue specialized documents regulating small and medium-sized artificial intelligence enterprises so as to promote technological innovation.
For example, regulatory agencies can grant small and medium-sized artificial intelligence enterprises priority access to AI regulatory sandboxes; provide special funds to ease their financing difficulties; and, on the premise of fully protecting individual rights and interests, appropriately lower obligation standards or grant a degree of exemption from liability. Regulatory agencies should regularly evaluate the effectiveness of such normative documents and continuously adjust their plans in light of peer evaluation, so that the documents genuinely meet the needs of small and medium-sized technology enterprises. Beyond these rules, designs such as sunset rules and anticipatory drafts can also be beneficial attempts at implementing experimental governance.
5. Conclusion
At present, the world stands on the eve of the full arrival of the Fourth Industrial Revolution. The first and second industrial revolutions centered on the enhancement of power; the third was the information revolution; and the fourth is the intelligence revolution, which is both connected to and fundamentally different from the third. The Fourth Industrial Revolution simulates and enhances human intelligence, centering on the development and empowering application of artificial intelligence technology. China's future artificial intelligence legislation must aim to promote the controllable development of artificial intelligence; otherwise, China will once again fall behind the global trend.
The EU's Artificial Intelligence Act adopts a risk management approach and builds a framework with command-and-control regulation at its core. Yet this single risk management approach presupposes that risks are static, determinate, and predictable, an assumption at odds with a complex and dynamically developing world. A more scientific approach would proceed from the dual nature of artificial intelligence activities, clarify the dual positioning of the Artificial Intelligence Law as both technology law and application law, and maximize the development of artificial intelligence technology and industry while regulating artificial intelligence activities. The author believes that a Chinese Artificial Intelligence Law that respects the autonomy of science and technology and combines flexibility with regulation will become a new paradigm for AI legislation and offer the wisdom of a great Eastern power to global AI governance.