
Getting Out of the Collingridge Dilemma: Dynamic Regulation of Generative Artificial Intelligence Technology



TONG Yunfeng



Abstract: The rapid development of generative artificial intelligence technology has confronted regulators with the Collingridge dilemma. On the risk side of the dilemma, the risks of generative artificial intelligence technology are becoming increasingly concrete, manifested as the common risks of the input layer and the hidden layer and the characteristic risk of the output layer, which reflects the necessity of administrative supervision and legal regulation. On the safety side of the dilemma, excessive attention to strict supervision in the name of safety will shrink the space for technological innovation and undermine China's international competitiveness in the field of advanced technology. To resolve the Collingridge dilemma of generative artificial intelligence technology in China, a dynamic regulation model should be established. First, enterprise autonomy should be strengthened while administrative supervision recedes: enterprise compliance serves as the first line of defense against technical risks, and administrative supervision retreats to the second line of defense. Next, whole-process behavior supervision should be replaced by a compliance-effect supervision mechanism that incentivizes and compels enterprises to implement their compliance plans through rewards and punishments. Finally, soft law should take the lead in guiding the improvement of hard law, laying the foundation, through the experimental exploration of soft law, for shaping a systematic artificial intelligence code in China.

Keywords: generative artificial intelligence; Collingridge dilemma; dynamic regulation model; artificial intelligence code

1. Posing the problem


ChatGPT (Chat Generative Pre-trained Transformer), launched by OpenAI in the United States, has made generative artificial intelligence a technological frontier. So-called generative artificial intelligence is technology that can generate new images, text, videos, or other content ("output") based on a user's text prompts ("input"). Generative artificial intelligence has been applied in various fields, including judicial administration, predictive justice, online dispute resolution, and criminal justice (such as "preventive policing"). On February 16, 2024, OpenAI released the Sora large model, which can generate videos from text alone; it is another highly disruptive large-model product after the text model ChatGPT and the image model DALL·E. At the same time, negative news about generative artificial intelligence technology is frequently reported, such as privacy violations, rampant misinformation, copyright infringement, and the generation of non-consensual images.

The coexistence of innovative value and negative effects in generative artificial intelligence technology has confronted regulators with the Collingridge dilemma. The so-called Collingridge dilemma is a proposition put forward by the British philosopher of technology David Collingridge in his book The Social Control of Technology (1980): if strict regulatory measures are imposed too early on an emerging technology, its innovative development will be hindered; if it is left unchecked or regulated too slowly, it will spin out of control. Generative artificial intelligence is a nascent technology, and China is currently in a stage of technological catch-up, with the ambitious goal of breaking the Western world's technological blockade in this field. This means that the Collingridge dilemma of generative artificial intelligence technology is especially apparent in China at the current stage, manifested in two aspects: on the one hand, if we blindly pursue technological innovation and let the technology develop unchecked, we will ignore the risks involved (the risk side); on the other hand, if we blindly prioritize safety and over-regulate, we will stifle the potential for technological innovation (the safety side). Faced with this dilemma, China's administrative legislation has produced the Interim Measures for the Management of Generative Artificial Intelligence Services (effective August 15, 2023, hereinafter the "Artificial Intelligence Measures"). How to apply these measures accurately so as to dissolve the Collingridge dilemma is a problem China must now confront directly. In other words, the core issue this article addresses is finding the limiting standards for the legal regulation of generative artificial intelligence.


2. The risk side of the Collingridge dilemma: Typification of the risks of generative artificial intelligence technology


The risk side of the Collingridge dilemma is that if the innovative value of generative artificial intelligence technology is pursued excessively and regulatory oversight is neglected, technological risks may proliferate or even spin out of control.


2.1 The correlation between technical operation processes and risk types

Generative artificial intelligence technology originated in machine learning theory. In 1950, Turing raised the question of whether machines could engage in intelligent reasoning and thinking. Machine learning theory took shape in the 1960s, and working machine learning systems were built in the 1980s and 1990s. A current definition of AI is "a computing system capable of engaging in human-like processes such as learning, adaptation, synthesis, and self-correction, and of using data to complete complex processing tasks". Drawing on the constantly growing capability and content of computers, scientists have developed a series of software programs, and artificial intelligence can now rival humans in generating images, text, or music. Generative artificial intelligence builds on the aforementioned technologies and presents multimodal features (such as speech + text, image + text, video + image + text, image + speech + text, and video + speech + text).

Artificial intelligence is in fact a combination of advanced algorithms and big data, together with the many technologies that make use of them. Generative artificial intelligence applications are built on Large Language Models (LLMs), which can recognize, predict, translate, summarize, and generate language. The LLM is a subset of generative artificial intelligence characterized by being "large": it requires vast amounts of data to train the model to learn the rules of language. Data is a key element in the competition among large models, and the high intelligence of generative artificial intelligence stems precisely from training on massive data, so the risks it gives rise to are all linked to data. The operation of generative artificial intelligence follows a process of "input data → compute on data → output information", whose three stages correspond to the three layers of program operation: (1) the input layer is where data is provided to the model, and each node is designed to receive a single element of the input data; (2) the hidden layer (prediction) is where most of the processing in the application occurs; it is called "hidden" because the data processed within it can be neither directly input into nor output from the model; (3) the output layer ultimately delivers the conclusions drawn by the hidden layer to the user. This operational process is closely tied to risk, and the risks carried by the technology are likewise staged: they are spread across the entire lifecycle of generative artificial intelligence, but their manifestations differ significantly at different stages.
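To make the three-layer pipeline concrete, the following is a minimal sketch in Python of a toy feedforward pass. It illustrates only the input → hidden → output structure described above, under simplified assumptions; it is not the architecture of any actual large language model.

```python
import numpy as np

# A minimal three-layer network illustrating the "input -> hidden -> output"
# pipeline. Real LLMs stack many transformer layers; this toy example only
# shows why the intermediate processing is called "hidden": its activations
# are never directly exposed as input or output.

rng = np.random.default_rng(0)

# Input layer: each node receives a single element of the input vector.
x = rng.normal(size=4)            # e.g., a 4-dimensional encoded prompt

# Hidden layer: where most of the processing occurs, invisible to the user.
W1 = rng.normal(size=(8, 4))
h = np.tanh(W1 @ x)               # hidden activations (not observable outside)

# Output layer: maps the hidden layer's conclusions to user-visible content.
W2 = rng.normal(size=(3, 8))
logits = W2 @ h
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over 3 toy "tokens"

print("user-visible output distribution:", probs.round(3))
```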


2.2 Common risks of generative artificial intelligence technology


Following this stage-based division, this article characterizes the risk types of the different operational stages of generative artificial intelligence technology as input layer risk, hidden layer risk, and output layer risk. Among them, input layer risk and hidden layer risk are risks common to modern intelligent technology, while output layer risk is a characteristic risk of generative artificial intelligence technology.

At the input layer, the risk of generative artificial intelligence technology manifests as data leakage. The Artificial Intelligence Act passed by the European Parliament on March 13, 2024 states in Article 3 that "risk" means the combination of the probability of harm occurring and the severity of that harm. In the input stage, generative artificial intelligence needs to crawl and absorb various types of data, and the risks manifest mainly as infringement of personal information, privacy, and trade secrets. Firstly, personal information rights may be infringed. In recent years, China has legislated many new rules and systems to protect personal information rights and interests, but the relevant rules do not specifically address generative artificial intelligence. Scraping publicly available information from the Internet is the main data source for generative AI models, and some technology companies also collect publicly available data to build large databases. Generative artificial intelligence produces informative content by drawing on its training data, which may contain sensitive and confidential personal information such as bank account numbers and biometric information; the sensitive personal information of some high-frequency users may thus become generated content and appear in other users' dialogue boxes. Secondly, citizens' privacy may be violated. Generative artificial intelligence absorbs information on user preferences, interests, and behaviors, infers user privacy through algorithmic operations, and this then becomes the basis for enterprises' targeted advertising. Thirdly, trade secrets may be leaked. The Data Governance Act passed by the European Union in 2022 introduces a "data altruism" system and encourages companies to donate data in the public interest so as to form data pools of research value. However, if the data fed into a large model for training contains a company's confidential business information, the consequences for the enterprise may be catastrophic. For this reason, the China Payment and Clearing Association has issued an initiative urging payment industry practitioners to use tools such as ChatGPT with caution. Meanwhile, the leakage of confidential chip-related data at South Korea's Samsung shows that this risk has moved from latent danger to reality. In general, the input stage carries a risk of data leakage that is not limited to personal information, privacy, and trade secrets; even state secrets may be leaked.
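As an illustration of the kind of compliance measure an enterprise might deploy at the input layer, the following is a minimal sketch of scrubbing a training corpus before ingestion. The patterns and placeholder format are illustrative assumptions, not a production-grade PII detector.

```python
import re

# A toy pre-training corpus scrubber: one possible compliance measure against
# input-layer data-leakage risk. The regexes below are simplified assumptions;
# real PII detection is far more sophisticated.

PII_PATTERNS = {
    "email":     re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone":     re.compile(r"\b(?:\+?\d{1,3}[-\s]?)?\d{3,4}[-\s]?\d{4,8}\b"),
    "bank_card": re.compile(r"\b\d{13,19}\b"),
}

def scrub(text: str) -> str:
    """Replace suspected personal identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

sample = "Contact Li at li.wei@example.com or 138-0013-8000, card 6222020200112233445."
print(scrub(sample))
```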

In the hidden layer, the risk of generative artificial intelligence manifests as algorithm abuse. Hidden layer risk is the risk of internal algorithmic operation, and the algorithmic process is secretive and opaque. The opacity of algorithms produces algorithmic black boxes, and non-professionals must rely on algorithmic explanations to understand what the algorithm is doing. Without effective supervision and accountability, hidden layer algorithms carry a risk of abuse. This requires embedding ethical code into generative artificial intelligence systems, translating norms such as digital justice, data ethics, and data rights into code and embedding them into the regulatory process of algorithmic operation. Lacking human subjective empathy, generative artificial intelligence based on machine learning algorithms naturally suffers from blind spots regarding justice and equality, manifested as unfairness, injustice, or immorality in algorithms. If the algorithm itself is flawed, it readily generates toxic information that departs from mainstream values, entrenching social prejudice and discrimination. The digital injustice and unfairness brought about by algorithmic discrimination are technical facts and common problems that are difficult to reverse in the application of artificial intelligence, transferring the moral situation of the real world into the digital world without reservation. In addition, algorithms are easily manipulated by humans, manifested mainly as information cocoons, induced addiction, algorithmic exploitation, algorithmic labeling, and algorithmic domestication. Algorithmic manipulation makes users slaves to algorithms, suffering the harm and exploitation of malicious algorithms. In fact, the risk of algorithmic infringement includes algorithmic obstruction and algorithmic damage: the former is abstract infringement based on risk or process, the latter concrete infringement based on results. This distinction of algorithmic risk in effect draws on the criminal law divisions between crimes of danger and crimes of actual harm, and between conduct disvalue and result disvalue.
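One concrete form of the internal self-inspection this paragraph calls for is a fairness audit of outcomes across user groups. The following is a toy sketch of a demographic parity check; the data, the grouping, and the 0.2 flag threshold are illustrative assumptions, and real algorithm audits are far more involved.

```python
from collections import defaultdict

# A toy fairness audit, assuming a binary "favorable outcome" and a sensitive
# group attribute per record. A large parity gap is one rough signal of the
# algorithmic discrimination discussed above.

def demographic_parity_gap(records):
    """records: iterable of (group, favorable: bool).
    Returns (gap, per-group favorable-outcome rates)."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += int(outcome)
    rates = {g: round(favorable[g] / totals[g], 3) for g in totals}
    return max(rates.values()) - min(rates.values()), rates

data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(data)
print(rates)                      # {'A': 0.667, 'B': 0.333}
print(f"parity gap = {gap:.3f}")  # flag if above a chosen threshold, e.g. 0.2
```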


2.3 The characteristic risk of generative artificial intelligence technology

In the output stage, generative artificial intelligence provides personalized services to users, mainly by outputting information that meets their needs. However, incorrect or misleading training data can lead to false outputs: absurd responses to inputs, or erroneous information generated from irrelevant inputs. The spread of erroneous information can have serious consequences, and generative artificial intelligence can be used by malicious actors to fabricate events, personas, speeches, and news for the purpose of spreading rumors, online fraud, extortion, and illegal propaganda. The risk of generative artificial intelligence technology at the output layer therefore manifests mainly as the risk of false information, which is also the characteristic risk of generative artificial intelligence technology.

There have been multiple cases abroad of generative artificial intelligence producing false information. For example, when generating basic information about Brian Hood, mayor of Hepburn Shire in Australia, ChatGPT falsely output that he had been imprisoned for bribery. In another case, ChatGPT fabricated the rumor that Jonathan Turley, a law professor at George Washington University in the United States, had sexually harassed students. False information can also cause property damage: an incorrect answer given by an artificial intelligence product wiped more than $100 billion off the market value of Google's parent company Alphabet. Currently, it is estimated that OpenAI spends approximately $3 million per month operating ChatGPT, about $100,000 per day.

There have also been cases in China of generative artificial intelligence producing false information. On February 16, 2023, a homeowners' group of a residential community in Hangzhou was discussing ChatGPT, and during a livestream in the group, ChatGPT was asked to write a news release announcing the cancellation of Hangzhou's traffic restrictions. Screenshots of the output were then forwarded by other, uninformed homeowners in the group, spreading the false information.

Faced with this problem of false information, generative artificial intelligence companies may develop programs and software to prevent or detect it, but such technological means are easily breached by newer technological measures. The facts show that technological measures alone cannot solve the problem of false information in generative artificial intelligence; technical solutions can only serve as auxiliary means, and a systematic solution must still be shaped from the legal perspective. In this regard, China's administrative legislation prohibits false information. In addition to the Artificial Intelligence Measures' prohibition on generating false information, the Provisions on the Management of Network Audio and Video Information Services and the Provisions on the Management of Deep Synthesis of Internet Information Services emphasize in relevant provisions that service providers must not use new technologies and new applications to produce, publish, or disseminate false news information. Moreover, the iterative updates of new generative artificial intelligence products such as Sora may improve the technology to some extent, but they will also heighten the risk of false information.
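One auxiliary technical means consistent with such labeling provisions is provenance tagging of generated content. The following is a minimal sketch; the field names and the digest scheme are illustrative assumptions rather than any standardized format.

```python
import hashlib
import json
from datetime import datetime, timezone

# A toy provenance label for AI-generated content, in the spirit of the
# prominent-labeling duties in the deep-synthesis provisions. Field names
# and the integrity scheme are assumptions for illustration only.

def label_generated_content(text: str, model_name: str, provider: str) -> dict:
    record = {
        "content": text,
        "ai_generated": True,          # prominent machine-readable flag
        "model": model_name,
        "provider": provider,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    # Integrity digest so downstream platforms can detect tampering.
    payload = json.dumps(record, sort_keys=True, ensure_ascii=False)
    record["sha256"] = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return record

tagged = label_generated_content("sample generated text", "demo-llm", "ExampleAI")
print(json.dumps(tagged, ensure_ascii=False, indent=2))
```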

As can be seen from the above, in the face of the typified risks of generative artificial intelligence, regulatory authorities must not blindly pursue technological innovation while neglecting risk prevention and control; otherwise they will fall onto the risk side of the Collingridge dilemma. Facing this reality, regulators, legislators, and scholars need to jointly propose solutions to overcome the difficulty.


3. The safety side of the Collingridge dilemma: The overregulation of generative artificial intelligence technology


The safety side of the Collingridge dilemma is that if regulators blindly pursue the safety value of generative artificial intelligence technology, they will adopt excessive regulatory measures (even bans) that constrain technological development and hinder innovation. While we are still struggling over what regulatory measures to take, new generative artificial intelligence products from abroad may already have emerged. For technologies of disruptive innovative value, a strict regulatory model may thus hinder technological innovation.


3.1 Comparative observation of regulatory models for generative artificial intelligence technology

Currently, countries are trying to find a sound approach to regulating the risks of generative artificial intelligence technology, but existing regulatory schemes tend toward two extremes.

On the one hand, the United States adopts a relatively relaxed regulatory approach centered on encouraging innovation. In the United States, AI technology is subject to detailed, sector-specific supervision to ensure that regulation is proportionate and adaptive. Although the US government tends to adopt innovation-friendly solutions, technological risks have drawn wide concern in both theory and practice, and the tug-of-war between the two positions has left the United States without a systematic, unified artificial intelligence law. The standard currently capable of directly regulating generative artificial intelligence technology is the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, signed by US President Biden on October 30, 2023. Its purpose is to ensure that the United States remains the world leader in seizing the promise and managing the risks of AI. The executive order incorporates rules from previous presidential executive orders, emphasizing greater autonomy for businesses with government supervision receding into the background, including encouraging 15 leading companies to voluntarily commit to promoting safe, secure, and trustworthy AI. The order establishes eight goals, among which "promoting innovation and competition" and "enhancing US leadership overseas" are the two most important, highlighting the United States' orientation toward encouraging technological innovation. The United States is a leader in generative artificial intelligence technology, occupies the leading position in advanced technology, and often plays the role of the perpetrator in specific cases. In other words, the United States, on the strength of its technological advantages, becomes a creator and perpetrator of risks; as a technological hegemon, it will pursue technological innovation in order to reap greater international benefits, and will not attach much importance to preventing and controlling technological alienation so as to reduce risks and damage to others.

On the other hand, the EU adopts a more stringent, safety-focused regulatory approach. The EU follows a strong regulatory model, aiming to achieve comprehensive regulation of AI applications by formulating a unified AI act that establishes a common system of strengthened supervision and enforcement at the EU level. In April 2023, Italy announced a ban on ChatGPT, and several EU countries followed up and communicated with OpenAI on compliance issues. The EU's Artificial Intelligence Act was passed on March 13, 2024, a solid step in the EU's legislative regulation of the artificial intelligence field. The regulation's purpose is to improve the functioning of the internal market, in particular by establishing a unified legal framework for the development, placing on the market, putting into service, and use of artificial intelligence systems within the Union in conformity with the Union's values, promoting human-centered and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, and the fundamental rights enshrined in the EU Charter of Fundamental Rights, including democracy, the rule of law, and environmental protection, preventing the harmful effects of artificial intelligence systems within the Union, and supporting innovation. The strong regulatory model imposes relatively strict obligations on providers of generative artificial intelligence services, requiring companies to invest heavily in compliance. Compared with the United States, the European Union lags in the development of artificial intelligence technology: almost all of the artificial intelligence products used by EU consumers are developed and sold by American companies, and in specific cases the EU is often the recipient and victim of artificial intelligence technology risks. In the field of cross-border flows of personal data, for example, EU consumers have often suffered infringement by American companies; the data flow arrangements between the EU and the United States have been negotiated repeatedly, from the Safe Harbor agreement to the Privacy Shield and on to the latest rounds of negotiation, yet they remain insufficient to give EU citizens an adequate sense of security. In other words, in the field of generative artificial intelligence technology the EU is in a relatively weak position: unable to surpass the United States in technological innovation, it must constantly guard against the risks and damage brought by American companies. This forces the EU to choose a regulatory strategy that leans toward safety.

Most other countries or regions follow the EU model in regulating generative artificial intelligence technology. At the ASEAN Digital Ministers' Meeting on February 2, 2024, for instance, ASEAN released its AI governance framework, the ASEAN Guide on AI Governance and Ethics, which draws on the EU model in classifying AI risks into low, medium, and high. Thailand released a draft regulation on AI services in 2022; overall, Thailand's draft closely resembles the EU's, likewise adopting a risk-based approach. Brazil has also borrowed from the EU model: on May 12, 2023, the Brazilian Senate reviewed Bill 2338 of 2023, which sets out operational requirements for Brazilian AI systems, including requiring suppliers to conduct initial self-assessments to determine whether a system should be classified as "high-risk" or "excessively high-risk". At present, most countries have not yet enacted special laws on generative artificial intelligence technology; the European Union took the lead with the Artificial Intelligence Act, the world's first comprehensive regulatory law in the field. In the field of generative artificial intelligence, the situation of other countries or regions is much the same as the European Union's: on the one hand, they find it difficult to surpass the United States in technological innovation; on the other, they all suffer technological bullying and risks from the United States. In other words, the EU Artificial Intelligence Act is a model for defending against American technological bullying. Most countries or regions use it as a reference blueprint when legislating on artificial intelligence and ultimately choose a relatively strict technology regulatory model. This logic has already been fully reflected in international legislative practice on personal information.


3.2 Examination of the Strict Regulatory Model of Generative Artificial Intelligence Technology

The international community's regulation of generative artificial intelligence technology currently tends toward polarization, and which regulatory model China should adopt is a question that the development of intelligent technology in China must answer. China's digital laws, such as the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law, regulate the risks of artificial intelligence technology directly or indirectly, but these provisions are relatively abstract and unspecific, and can serve only as value orientations for shaping concrete rules. As for the specific regulation of generative AI technology, Article 1 of China's Artificial Intelligence Measures states that the measures are formulated to "promote the healthy development and standardized application of generative AI, safeguard national security and social public interests, and protect the legitimate rights and interests of citizens, legal persons and other organizations". These legal norms collectively indicate that the purpose of China's legal regulation of artificial intelligence technology is to strike a balance between technological innovation and risk avoidance. That purpose is determined by China's national conditions, and China's situation in the field of generative artificial intelligence technology is not identical to that of the United States or the European Union. On the one hand, China will not use its technological advantages to practice technological hegemony as the United States does, yet it must still confront technological bullying and the risks arising from it; on the other hand, China's artificial intelligence technology is second only to that of the United States, and the EU model, a compromise struck between resisting American technological bullying and preventing technological risks, does not suit a China that has the capacity and the ambition to catch up with the United States and break its technological stranglehold. In other words, China should not take sides between the EU model and the US model, but should choose a model better suited to its national conditions, one that helps China gain international discourse power in technology. China's basic national conditions and legislative objectives require seeking a balance between technological innovation and risk avoidance, and the EU's overly strict technology regulatory model is not advisable.

Firstly, strict regulation devours the space for technological innovation. Every technological innovation is a practice of "crossing the river by feeling the stones". A mindset that eliminates opportunity for fear of risk does not meet the needs of building an innovative country; faced with the potential of technological innovation, the state needs to adopt a moderately tolerant attitude. The development of emerging technologies is researchers' exploration of the unknown, and the risks a technology carries cannot be fully understood at the research and development stage. Every technological advance is the result of developers "dancing with wolves", and neither users nor regulators can afford recklessness at the edge of danger; both need to treat emerging technologies with caution. If a high-pressure posture is adopted toward the regulation of emerging technologies, developers and users will live under a sword of Damocles, treading on thin ice, timid and anxious throughout the innovation process; technological innovation would then be nothing but a castle in the air, a utopian fantasy. At the same time, a strict regulatory model gives the public the false impression that generative artificial intelligence technology is a hotbed of risk; consumers or users will refuse to use interactive large-model products, which is not conducive to the application and promotion of such products and will ultimately stigmatize new artificial intelligence technologies severely. Emerging technologies need to be regulated amid innovation, technological innovation should be given appropriate freedom, and regulation should not run ahead of innovation. Regulators need to maintain moderate, neutral tolerance toward emerging technologies, for excessive regulation or premature legal intervention can seriously impair the potential for technological innovation.

Secondly, strict regulation is not conducive to breaking the technological blockade against China. Human society entered modern civilization through two industrial revolutions, and the deepening information technology revolution is now once again transforming human life. The new stage of the information technology revolution is the artificial intelligence revolution, and artificial intelligence technology has become a new field of competition in the international community. China once missed the first and second industrial revolutions, which led to repeated invasion and humiliation by foreign powers in the late Qing Dynasty; we understand deeply the law of the jungle that "the backward get beaten". In other words, China knows better than any other country the importance of mastering core technological advantages. In the competition of the new round of the artificial intelligence technology revolution, China obviously cannot miss the opportunity again; otherwise it will be almost impossible to "overtake on the bend" and break the technological blockade of Western countries, and China may even be forced to face a new round of "technological colonization". It should be made clear that the rapid development of generative artificial intelligence technology in the United States, from large language models to multimodal large models and on to the technological breakthrough of the Sora model, is inseparable from the relatively relaxed regulatory policies and tolerant market environment there. Although China should not adopt the overly loose regulatory model of the United States, it certainly cannot accept the overly strict regulatory model of the European Union.

Finally, strict regulation does not meet the needs of building China's independent digital legal knowledge system. If China directly transplants the EU model or the US model, it may constrain technological innovation or face "technological bullying"; more seriously, its legal regulatory system, regulatory model, and academic knowledge system may be "colonized". China therefore needs to construct an independent regulatory model, legal system, and academic system grounded in its national conditions. Chinese scholars, for example, have proposed the concept of digital law, whose construction responds to the development needs of China's transformation into a digital society. Digital law goes by various names in the literature, such as "network law", "data law", "computational law", and "artificial intelligence law". This article considers "digital law" the more appropriate term, the others each reflecting only one facet of it; computational law, for instance, emphasizes integrating computational thinking, methods, and techniques into law, expressing the methodology of digital law. At the same time, "digital law" accords with the discourse of China's top-level institutional design, helps unify technological discourse with normative discourse, and can cover the entire process of legal research on technological applications. The continuous updating of generative models reflects the complexity of modern society: we need not only to innovate social governance systems but also to harness digital technology to solve problems of temporal and spatial complexity. Existing legal concepts and standards should likewise be embedded into AI systems, and law should become an applied philosophy for aligning the values of multi-agent systems. The integration of legal and technological measures is the core of digital law. For the sound development of generative artificial intelligence technology, it is necessary to construct a regulatory model, a digital legal system, and an academic system with Chinese characteristics. In short, digital jurisprudence, as a new form of narrative discourse, is of great significance for Chinese modernization and for reconstructing the order of the community of human civilization.


4. The solution to the Collingridge dilemma: Dynamic regulation of generative artificial intelligence technology


The Collingridge dilemma is a problem common to modern digital technologies. For generative artificial intelligence technology, the most suitable method of resolving the dilemma must be sought on the basis of its new background, variables, and requirements. This article argues that generative artificial intelligence technology is innovative and cutting-edge, iterates rapidly, and carries variable risks, so a dynamic regulatory model should be chosen for it.


4.1 Dynamic regulation of generative artificial intelligence technology

Unlike other digital technologies, generative artificial intelligence technology iterates and updates more rapidly. Even before people have fully grasped the functions and features of one generation of intelligent products, it may already have been replaced by the newest generation: from ChatGPT to Sora, the pace of iteration is striking. In other words, we cannot simply transplant the regulatory solutions of existing technologies onto generative artificial intelligence technology; we need to establish dynamic, scenario-based regulatory methods that reflect its characteristics.

Given the volatility of generative artificial intelligence technology, legal regulation should abandon static thinking and move toward a dynamic model. So-called "dynamic regulation" comprises three main elements. (1) In terms of the roles of the main actors, a basic framework of "ex post regulation" should be built: the government withdraws from the front line of regulation and the role of enterprise self-discipline is strengthened, giving full play to enterprise compliance in coping with the volatile risks of generative artificial intelligence technology, while the government need only control the overall regulatory situation. (2) In terms of regulatory method, a compliance supervision model should be implemented: the whole-process behavior constraint mechanism is replaced by an ex post compliance-effect supervision mechanism, generative artificial intelligence enterprises are given more autonomy, and the improvement of compliance measures is promoted through a combination of rewards and punishments. (3) In terms of legal structure, "soft law" should take the lead in guiding the improvement of "hard law". Hard law is stable but lags behind; faced with rapidly iterating generative artificial intelligence technology, its regulatory function cannot be brought into full play. Soft law, though weak in enforcement, is flexible and agile and can guide the compliant development of generative artificial intelligence technology; its experimental role can be leveraged to upgrade stable general rules and basic principles into hard law. In summary, dynamic regulation conforms to the characteristics of generative artificial intelligence technology.

Dynamic regulation is grounded in specific technological scenarios, effectively integrating technology with government supervision and legal regulation and enabling deep cooperation between technology and law in jointly addressing risks and damage. This is a new kind of digital legal thinking. The model meets the scenario-based needs of rights protection in the digital age, is reasonable and workable, and should be advocated. By contrast, the static thinking of traditional digital technology regulation attempts to define the legitimacy boundaries of technological behavior with a single unified standard. Against the backdrop of rapidly developing artificial intelligence technology, such static thinking cannot find a unified standard for resolving the Collingridge dilemma; even if some standard can be fixed temporarily, it will be overturned by rapid technological change. Legislators would then resort to "knee-jerk legislation" in an endless loop, with each new law quickly obsolete; the function of law would become symbolic and hollow, seriously undermining the law's authority and stability. It follows that legislators and regulators should not ignore the dynamic developmental laws of generative artificial intelligence technology; they should proceed from its technical characteristics, refine technical standards through scenario-based and classified methods, and establish a dynamic regulatory system. In the face of the Collingridge dilemma of generative artificial intelligence technology, dynamic regulation is a comparatively feasible solution.


4.2 Shaping enterprise autonomy as the first line of defense against technological risks

Generative artificial intelligence is deeply shaped by its designers, and the platform enterprises that develop the technology and provide intelligent services should be the parties primarily responsible for risk avoidance. For the risks of generative artificial intelligence technology, priority should be given to strengthening internal compliance within enterprises, with government administrative supervision intervening only when enterprise self-discipline fails. Constraining the research and development behavior of large-model designers through compliance measures is the fundamental way to steer generative artificial intelligence technology toward good. The compliance construction of platform enterprises is manifested mainly in fulfilling their data security protection obligations.

In the digital society, all manner of digital technology services are provided through online platforms. Under Article 2(h) of the European Union's Digital Services Act, an "online platform" is a hosting service provider that stores and disseminates information to the public at the request of the service recipient. The popularity of generative AI products is inseparable from the role of online platforms, and behind the platforms stand Internet enterprises with rich data resources. Intelligent platform enterprises that command powerful data resources and technological advantages should assume the role of gatekeeper in the process of intelligent services. The gatekeeper's compliance function, as a new governance tool, is a new practice of the public-private cooperative governance model in the digital economy era. Ultra-large generative artificial intelligence service platforms should be legally defined as "gatekeepers", since they play a decisive role in technology settings, algorithm operation, and risk control; under the theory of controllers' obligations, generative artificial intelligence service platforms should bear stricter duties. Article 48 of the EU Digital Markets Act likewise emphasizes that gatekeepers should introduce a compliance function, independent of the gatekeeper's operational functions and performed by one or more compliance officers, including the head of the compliance function department. Article 28 of China's Data Security Law provides that data processing activities and the research and development of new data technologies should be conducive to promoting economic and social development, enhancing people's well-being, and consistent with social morality and ethics. This requires large online platform enterprises occupying the gatekeeper position to strengthen compliance construction when researching, developing, and applying generative artificial intelligence technology; the technical services they provide must be not only legal and compliant but also ethical and moral.

The application of generative artificial intelligence technology may bring problems such as the digital divide and digital discrimination, and it is intelligent platform enterprises that are positioned to solve these problems first. Intelligent platform enterprises and their staff, by virtue of their expertise, can identify problems early and propose targeted solutions.

For the digital divide, intelligent platforms should strive to provide equal access opportunities and help different groups cross the divide by lowering technological barriers and similar means. They can provide online and offline information technology education and training for groups such as the elderly, low-income families, and rural residents, raising the digital literacy of the whole population; they should design more humane, user-friendly digital services and tools, and provide special innovative products and services for groups with special needs (such as the visually or hearing impaired), ensuring that the fruits of technological innovation can be used by broad user groups.

For information cocoons, intelligent platforms need to optimize algorithmic recommendation mechanisms, avoid overly homogeneous content services, encourage diversified presentation of information, and broaden users' horizons (a minimal sketch of such re-ranking follows at the end of this subsection). In the process of generative artificial intelligence services, professional information inspectors are necessary: human editors can filter and integrate content into more comprehensive feeds. The intelligent platform should internally self-correct and self-examine its algorithms to break the information cocoon phenomenon.

For digital discrimination, intelligent platforms should establish fair data collection and processing mechanisms and regularly review algorithms to identify and eliminate potential bias and discrimination; clearly define the responsible parties at each stage of algorithm development and decision-making, ensuring that algorithm designers, programmers, operators, and decision-makers are all accountable for the fairness of the algorithm; consider fairness and transparency at the algorithm design stage, avoiding discrimination rooted in designers' own values and ensuring that the algorithm does not rely on incomplete or biased data; and standardize data collection and processing, ensuring data diversity and representativeness to avoid algorithmic discrimination caused by data bias.

For digital forgery, intelligent platforms should strengthen content review and use technological means to identify and filter false content such as deepfakes; they should develop and deploy artificial intelligence models that detect deepfake content by analyzing the statistical characteristics of video content, facial feature matching, audio, and contextual information. Platforms should establish strict content review mechanisms and require prominent labels in generated or edited information content to inform the public of the deep synthesis involved.

For digital bubbles, that is, overly optimistic or unrealistic expectations driven by technical, market, or social factors, intelligent platforms should provide clear, accurate information so that users and investors understand the platform's operations, financial health, and market performance; fully safeguard the right of users and investors to be informed and raise their risk awareness; and avoid artificially inflating the price of products or services through unrealistic marketing or exaggerated claims, keeping price consistent with value. Intelligent platform enterprises should keep increasing investment in technological innovation and promote the practical application of technology.

For digital addiction, intelligent platforms should advocate the healthy use of digital products and provide tools that help users manage their usage time; adjust recommendation algorithms to avoid infinite scrolling and excessively personalized recommendations, reducing the chance that users are trapped in endless browsing loops; allow users to set daily or weekly usage limits, restricting functions or issuing reminders when limits are exceeded; and provide parents with tools to control their children's use of digital software, helping them regulate and restrict children's online activities.

For digital monopolies, intelligent platform enterprises must not abuse market dominance and should strengthen the portability, interoperability, and openness of data. They should enhance the transparency of their algorithms, ensure that algorithmic applications are verifiable, explainable, and accountable, and guarantee fairness among different users and participants.

For digital traps, intelligent platforms should establish strict security mechanisms against online fraud and similar conduct, and provide clear privacy policies and terms of service so that users fully understand how their data is collected, used, and protected; regularly push educational content on network security and privacy protection to raise users' self-protection awareness; adopt strict data protection measures, including encryption, anonymization, and access control, to prevent data leakage and abuse; establish sound content review mechanisms to monitor and handle harmful content such as false information; clarify the attribution of responsibility when digital traps occur and provide users with clear channels for complaint and appeal; invest continuously in research and development, using technologies such as artificial intelligence and machine learning to strengthen the monitoring of and defense against network threats; and conduct regular self-inspections and third-party audits to ensure that operations comply with relevant laws, regulations, and standards.
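To illustrate one of the measures above, the following is a toy sketch of a diversity re-ranking pass against information cocoons. The scoring scheme and the penalty factor are illustrative assumptions, not any platform's actual recommendation algorithm.

```python
from collections import Counter

# A toy re-ranking pass against "information cocoons": penalize items whose
# topic the user has already seen many times, so homogeneous content does not
# crowd out everything else.

def diversify(ranked_items, seen_topics, penalty=0.3):
    """ranked_items: list of (item_id, topic, relevance), sorted by relevance.
    Returns items re-sorted after discounting over-exposed topics."""
    exposure = Counter(seen_topics)
    rescored = [
        (item_id, topic, relevance * (1 - penalty) ** exposure[topic])
        for item_id, topic, relevance in ranked_items
    ]
    return sorted(rescored, key=lambda r: r[2], reverse=True)

candidates = [("a1", "sports", 0.95), ("a2", "sports", 0.94),
              ("a3", "science", 0.90), ("a4", "culture", 0.88)]
history = ["sports", "sports", "sports"]

for item in diversify(candidates, history):
    print(item)
# science and culture items now outrank a fourth straight sports story
```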

In summary, solving the specific problems above is impossible without the autonomous action of intelligent platform enterprises. Intelligent platform enterprises should be required to bear the obligation of solving specific problems in the course of generative artificial intelligence services. If an intelligent enterprise has the capacity to solve a problem but refuses to fulfill its obligation, causing serious damage, it should bear legal responsibility; in serious cases it must bear criminal liability for omission.


4.3 The compliance-effect supervision mechanism replaces the whole-process behavior constraint model


Enterprise autonomy is the first line of defense against risks, and government administrative supervision is the second line of defense against technological risks. Under dynamic regulation, government departments should first leave the specific problems brought by generative artificial intelligence technology to enterprise autonomy, with the government controlling only macro-level behavior and final results. The government's traditional administrative regulatory measures form a whole-process behavior constraint mechanism in which every act of enterprise operation is constrained by government supervision; such a mechanism is not conducive to unleashing the innovative potential of generative artificial intelligence enterprises. This article argues that the government should adopt a compliance-effect supervision mechanism: in principle the government no longer conducts ex ante or in-process supervision, but assesses the effectiveness of enterprises' compliance on the basis of required compliance autonomy. Enterprises that fulfill their compliance plans are rewarded; those that fail to fulfill them, or that cause damage, are punished. In other words, this is an ex post governmental regulatory mechanism that compels companies to keep improving their compliance construction. Its implementation requires collaboration between enterprises and government regulatory departments.

Firstly, intelligent platform enterprises should submit compliance plans suited to their actual situation. The compliance indicators issued by government regulatory departments are based on enterprise compliance plans. Platform enterprises have autonomy in choosing their compliance schemes when fulfilling compliance obligations, but may not submit plans that set lower requirements for themselves; this requires the government to review enterprises' compliance schemes. What the government ultimately seeks, however, is the effect of corporate compliance construction, and it has the power to adjust corporate compliance plans and issue corresponding compliance indicators on that basis. The accuracy and fulfillment of an intelligent platform enterprise's compliance plan will be an important basis for whether it bears, or has reduced, legal liability in the future. Government administrative regulation is, in fact, also a form of legal regulation, one conducted through legally prescribed administrative procedures.

Secondly, government regulatory agencies should set compliance targets for intelligent platform enterprises. The compliance-effect supervision mechanism requires the government to take assessment results as its basis and, through rewards and punishments, urge intelligent platform enterprises to improve and strengthen compliance construction, actively fulfill risk prevention obligations, and constrain and guide the behavior of platform users. The basis of the government's regulatory assessment is the compliance targets set in advance for a specific enterprise, and those targets are indicators the government sets after reviewing that enterprise's compliance plan. In other words, the government first requires companies to submit compliance plans and then, according to each company's actual situation, raises or lowers its compliance targets on the basis of the plan. When the assessment period expires, an enterprise that meets the assessment indicators receives appropriate incentives; one that fails to meet them should be penalized. The compliance-effect supervision mechanism is thus a dynamic regulatory mechanism: rather than a single unified regulatory standard, compliance indicators are tailored to the actual situation of specific enterprises. Facing the risks of generative artificial intelligence technology, a US congressman has proposed a non-binding resolution calling for a flexible government agency to oversee AI development, manage risks, ensure that the benefits of AI are widely distributed, and design governance mechanisms flexible enough to keep pace with ever-changing technology. The compliance-effect supervision mechanism is likewise a flexible regulatory mechanism that meets the current development needs of generative artificial intelligence technology.

Thirdly, the compliance-effect supervision mechanism includes classified and tiered risk management. The common goal of government regulation and legal regulation is to avoid technological risks, protect users' digital rights, and enhance the credibility of platforms. When exploring the potential of large artificial intelligence models, their potential impacts must be considered to ensure that the development and use of generative artificial intelligence benefit social progress. The regulation of generative artificial intelligence technology can hardly be separated from its risk categories. The EU Artificial Intelligence Act divides the risks of artificial intelligence technology into tiers, from high to low: unacceptable risk, high risk, limited risk, and minimal risk. Article 9(8) of the Act, for example, provides that testing of high-risk artificial intelligence systems shall be performed, as appropriate, at any time throughout the development process and, in any event, before they are placed on the market or put into service, and shall be carried out against predefined metrics and probabilistic thresholds appropriate to the intended purpose of the high-risk system. The EU's tiered management of artificial intelligence risks is worth China's measured reference: different regulatory schemes should be adopted for different levels of risk. For high-risk artificial intelligence enterprises, the government should require more specific, actionable compliance plans, and government departments need to conduct irregular compliance audits of such enterprises, reviewing the general characteristics, specific elements, and basic functions of their compliance plans and examining the specific conduct of enterprise members. In the future, national standards for platform compliance audits should be further unified to avoid excessive interference by governmental regulatory power. For artificial intelligence enterprises at the lower risk tiers, the government can adopt relatively relaxed regulatory measures.
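As a schematic of such tiered regulation, the following sketch maps risk tiers to regulatory duties. The tier names follow the EU Act as discussed above, while the duty lists attached to each tier are simplified assumptions for illustration.

```python
from enum import Enum

# A toy tiered-regulation lookup, loosely inspired by the EU AI Act's risk
# tiers. The obligations listed per tier are simplified assumptions, not the
# Act's actual requirements.

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the market"],
    RiskTier.HIGH: ["detailed, actionable compliance plan",
                    "irregular compliance audits",
                    "pre-market testing against predefined thresholds"],
    RiskTier.LIMITED: ["transparency and labeling duties"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def regulatory_duties(tier: RiskTier) -> list[str]:
    """Return the duties attached to a given risk tier."""
    return OBLIGATIONS[tier]

print(regulatory_duties(RiskTier.HIGH))
```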

Fourthly, government departments can establish assessment grades to implement the compliance-effect supervision mechanism. After assessing the degree of compliance fulfillment of different intelligent platform enterprises, government departments should determine compliance grades according to the specific circumstances, including levels such as excellent, good, qualified, and unqualified. Enterprises graded excellent or good receive rewards or preferential treatment; enterprises graded qualified are required to implement their compliance plans further and are urged to strengthen compliance construction to a higher standard; for enterprises graded unqualified, measures should be taken according to the specific situation, such as ordering rectification, interviewing the responsible persons, imposing administrative penalties, or ordering products off the market; where a crime is constituted, the case shall be transferred to the judicial authorities to pursue the criminal responsibility of the responsible persons and entities.
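A minimal sketch of this grading step follows; the score thresholds and the action catalogue are illustrative assumptions, not prescribed standards.

```python
# A toy version of the four-level compliance grading described above, mapping
# an assessment score to a grade and the grade to a regulatory response.
# Thresholds and actions are assumptions for illustration only.

def grade(score: float) -> str:
    if score >= 90:
        return "excellent"
    if score >= 75:
        return "good"
    if score >= 60:
        return "qualified"
    return "unqualified"

ACTIONS = {
    "excellent":   "rewards or preferential treatment",
    "good":        "rewards or preferential treatment",
    "qualified":   "ordered to implement the compliance plan to a higher standard",
    "unqualified": "rectification, interviews, administrative penalties, "
                   "or referral to judicial authorities in criminal cases",
}

for enterprise, score in [("Platform A", 92), ("Platform B", 68), ("Platform C", 41)]:
    level = grade(score)
    print(f"{enterprise}: {level} -> {ACTIONS[level]}")
```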


4.4 Soft law takes the lead in guiding the improvement of hard law

In response to constantly iterating generative artificial intelligence technology and its risks, dynamic regulation theory emphasizes experimental exploration through soft law first, followed by upgrading into hard-law regulation. The advantage of soft law lies in its dynamism and interactivity, while hard law offers binding force and stability. By coordinating the division of labor between soft law and hard law, a relatively complete artificial intelligence legal system can gradually take shape.

Firstly, letting soft law lead the improvement of hard law is the basic consensus of artificial intelligence legislation in Europe and the United States. In the European Union, the current legislation related to artificial intelligence mainly comprises the General Data Protection Regulation and the Artificial Intelligence Act, but the EU had in fact been exploring soft-law rules for many years before these formal statutes were enacted. For example, the EU's Coordinated Plan on Artificial Intelligence and its Ethics Guidelines for Trustworthy AI established four ethical cornerstones of artificial intelligence governance early on: respect for human autonomy, prevention of harm, fairness, and explicability. On this basis, the General Data Protection Regulation and the Artificial Intelligence Act built a vertical governance framework for artificial intelligence, imposing strict governance in areas such as data protection and privacy and setting a series of clear obligations for developers and providers of artificial intelligence systems. The United States, for its part, has issued soft-law instruments such as the National Robotics Initiative and the National Artificial Intelligence Research and Development Strategic Plan. The American governance model centers on innovation leadership, focusing on maintaining and promoting the innovative development of AI technology, and its governance architecture is decentralized. The regulatory concepts in these soft-law instruments directly shaped statutes such as the California Consumer Privacy Act (CCPA) and the Virginia Consumer Data Protection Act (VCDPA). Although the European Union and the United States have adopted strict and lenient governance models respectively, both follow a path of legal improvement in which soft law informs hard law.

Secondly, China has implemented the soft-law-first model of artificial intelligence governance. On February 29, 2024, the National Cybersecurity Standardization Technical Committee issued the "Basic Requirements for the Security of Generative Artificial Intelligence Services" (hereinafter the "Artificial Intelligence Requirements"), which guides the compliance construction of generative artificial intelligence technology in terms of corpus security, model security, security measures, security assessment, and other requirements. As a typical piece of soft law, the Artificial Intelligence Requirements further refines the rules applicable to generative artificial intelligence technology on the basis of the Artificial Intelligence Measures, becoming an "experimental field" for exploring a systematic Chinese artificial intelligence law. Exploring artificial intelligence governance rules through soft law is, in essence, the "regulatory sandbox" model. The regulatory sandbox originated in the UK financial sector and refers to a controlled environment established by public authorities in which new artificial intelligence systems can be safely developed, tested, and validated for a limited time, under regulatory supervision and according to a specific plan, before being placed on the market or put into use. Article 3(55) of the EU Artificial Intelligence Act defines an "artificial intelligence regulatory sandbox" as a specific and controlled framework established by a competent authority that offers providers or prospective providers of artificial intelligence systems the possibility to develop, train, validate, and test an innovative artificial intelligence system for a limited time, under regulatory supervision and pursuant to a sandbox plan. The Act devotes a dedicated chapter to specific rules for AI regulatory sandboxes. Exploring innovative governance rules for generative AI through regulatory sandboxes is a necessary path toward an agile governance toolbox that reserves moderate trial-and-error space for emerging technologies. Although the EU's relatively strict regulatory scheme may not suit China's technological development needs, its regulatory sandbox rules are worth learning from. As an experimental instrument for exploring the regulation of technological risks, the regulatory sandbox should be recognized and adopted by China's soft law, so that regulatory rules suited to China's national conditions can be explored with an inclusive and prudent attitude and then upgraded into hard law.
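
To make the structure of a sandbox plan tangible, a minimal sketch follows. The field names and the default activity list are assumptions for illustration, loosely echoing the elements of Article 3(55) quoted above; they are not terms prescribed by the EU Act or by any Chinese rule.

from dataclasses import dataclass
from datetime import date


@dataclass
class SandboxPlan:
    """Hypothetical admission record for a regulatory sandbox: development,
    training, validation, and testing happen for a limited time, under the
    supervision of a competent authority, according to an agreed plan."""
    provider: str
    system_name: str
    supervising_authority: str
    start: date
    end: date                                  # the "limited time" window
    permitted_activities: tuple = ("develop", "train", "validate", "test")

    def active_on(self, day: date) -> bool:
        """Activity outside the agreed window falls outside the sandbox."""
        return self.start <= day <= self.end

On such a record, a supervising authority could, for instance, treat any test activity reported outside the plan's window as falling outside the sandbox's protection.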

Finally, China needs to establish an Artificial Intelligence Code in the future. The EU Artificial Intelligence Act, as the first systematic law in the field of artificial intelligence, reflects the upgrading of soft-law rules into hard-law norms once they have matured. The specific rules established by EU legislation may be too strict for China's technological development needs, but the EU's legislative experience can still provide inspiration. China can continue to explore through regulatory sandboxes and develop its own artificial intelligence law on the basis of mature soft-law rules. Discussion of codification in multiple fields in China is deepening, and step-by-step codification experience has accumulated. On this basis, China can in the future formulate a systematic "Artificial Intelligence Code" to complete codification in the field of artificial intelligence. The Artificial Intelligence Code should include the following content: (1) establish rules on rights and obligations, setting out basic rights rules for users, obligation rules for intelligent service platforms, and regulatory authority rules for government departments; (2) establish a compliance effectiveness supervision mechanism that avoids excessive government regulation and incentivizes intelligent platform enterprises to strengthen compliance construction; (3) set out regulatory sandbox rules that encourage the exploration of new rules through the sandbox model and continuously promote the improvement of the Code; (4) establish an organizational structure, on the one hand requiring government departments to set up specialized departments to regulate artificial intelligence technology, and on the other hand requiring intelligent platform enterprises to set up compliance departments or compliance specialists specifically responsible for the enterprise's intelligent technology compliance work and for coordinating with government regulators; (5) establish a hierarchical legal responsibility system: for civil liability, draw on the experience of China's Tort Liability Law and Consumer Rights Protection Law to improve mechanisms for pursuing civil liability and address the high cost and low return of rights protection; for administrative liability, set administrative penalty rules graded by the severity of an intelligent platform enterprise's failure to fulfill its compliance obligations; for criminal liability, set out specific offenses and statutory penalties for illegal conduct by generative artificial intelligence platform enterprises, users, and third parties, making the Artificial Intelligence Code a genuine subsidiary criminal law.


Conclusion


As a cutting-edge technology, generative artificial intelligence has become a core arena of international competition, and countries have launched their own generative artificial intelligence services in an attempt to seize market share. The United States, relying on its technological advantages, adopts a relaxed regulatory model that encourages innovation, while the European Union, in a defensive position, adopts a strict regulatory model. China is currently at the breakthrough stage of generative artificial intelligence technology: on the one hand, it must actively encourage technological innovation to break foreign technological suppression and blockades; on the other hand, it must guard against technological risks and technological alienation. This means that China cannot directly transplant the European or American regulatory model, but must seek a balance between technological innovation and risk avoidance in order to escape the Collingridge's Dilemma of generative artificial intelligence technology. To this end, China should adopt a dynamic regulation model. Firstly, activate the effectiveness of corporate compliance autonomy to build the first line of defense against risk; secondly, have government regulators adopt a compliance effectiveness supervision mechanism in place of the traditional whole-process behavioral constraint model, giving enterprises sufficient space and freedom to explore technological innovation; finally, through experimental exploration of soft law, guide the improvement of hard law and ultimately shape an "Artificial Intelligence Code" that adapts to the development of the times and meets local needs.