2024-06-14

When Will We Truly Enter the "Moment of AI Governance Legislation"


Author: Ji Weidong

University Professor of Humanities and Social Sciences and Director of the China Institute for Socio-Legal Studies, Shanghai Jiao Tong University


Development versus Security: No Consensus Yet on Value Judgments


In June 2023, the State Council included a draft Artificial Intelligence Law in its 2023 annual legislative work plan. Against this backdrop, in July of the same year, the Cyberspace Administration of China and six other departments jointly issued the "Interim Measures for the Management of Generative AI Services," the world's first regulation addressing large models and AI-generated content (AIGC). In August, the Institute of Law of the Chinese Academy of Social Sciences put forward the "Model Law on Artificial Intelligence 1.0 (Expert Proposal Draft)." Yet the five-year legislative plan released by the Standing Committee of the National People's Congress in September made no mention of an AI Law. This striking discrepancy shows that since the emergence of ChatGPT, the rapid iteration of AI has plunged us into the unknown, chaotic condition of human-machine coexistence, and the tension between technological and industrial development on one side and the security of human society on the other has become unprecedentedly pronounced. Because the competing demands have not yet found an appropriate point of balance or compromise, the legislature has adopted a more cautious stance. In March of this year, researchers from seven universities, including China University of Political Science and Law, jointly released an "Artificial Intelligence Law (Scholars' Proposal Draft)" in an attempt to rekindle enthusiasm and push the legislative process forward.


In fact, it is precisely because no consensus on value judgments and public choices has formed between development and security that, after the European Parliament adopted its negotiating mandate on the draft Artificial Intelligence Act in June 2023, more than 150 technology companies, including Siemens and Airbus, jointly issued an open letter of protest, arguing that the bill's overly stringent regulatory mechanisms would significantly narrow the space for technological innovation and slow the development of the EU digital economy. Clearly, both the governance of AI and legislation on AI need stronger participation from the industries concerned. The dramatic upheaval in OpenAI's senior management in November 2023 likewise laid bare the intense struggle between the "development first" and "security first" camps. Even so, in March 2024 the European Parliament formally passed the Artificial Intelligence Act by a large majority. This means that the EU legislature has settled on a value ordering that places security above development, requiring member states to strengthen regulation of frontier technological expansion and application scenarios, while also attempting to pursue a protectionist policy for the digital space within the region.


Almost simultaneously, more than thirty Chinese and overseas technology experts and industry leaders signed the "Beijing AI Safety International Consensus" in China, drawing several clear red lines for AI research and development and attempting to build an international cooperation mechanism on that basis. The main contents of the Beijing Consensus include: ensuring human control over the replication and iteration of AI systems; opposing the design of large-scale autonomous weapons; introducing national registration systems to strengthen supervision and international audits in line with global alignment requirements; preventing the proliferation of the most dangerous technologies; developing comprehensive governance methods and technologies; and building a stronger global network of security assurance. An interesting and thought-provoking contrast: while European technology companies fear that the regulation-heavy EU Artificial Intelligence Act will inevitably erode 17% of investment in the AI industry, the "Beijing AI Safety International Consensus" calls on governments and companies to devote one third of their AI research and development budgets to safety assurance. This "33% versus 17%" contest over costs and benefits seems to constitute a new commanding height in the formulation of rules and institutions.


The "Moment of AI Governance Legislation" and the Regulation-First Orientation


If the video generation model Sora and the ultra-long-context model Gemini 1.5 Pro, launched in early spring 2024, mark a phenomenological singularity of the "unity of heaven and man," then the overwhelming vote of the parliamentarians in Strasbourg and the international consensus of the technology community in Beijing together signal the "moment of AI governance legislation." It is therefore necessary to survey the state of AI-related legislation and its implementation in various countries. So far, national institutional designs can be roughly divided into four models: the "hard law model," the "soft law model," the "combined hard and soft law model," and the "technology-procedure model."


It goes without saying that the newly passed EU Artificial Intelligence Act represents the hard law model, whose basic characteristic is that regulation outweighs research and development. This legislative concept can be traced back to the three laws of robotics that Isaac Asimov set out in his short story collection "I, Robot" (1950); its more recent academic expressions are the basic propositions advanced by W. Wallach and C. Allen in "Moral Machines" (Oxford University Press, 2009) and by M. Anderson and S. L. Anderson in "Machine Ethics" (Cambridge University Press, 2011): first, ethical standards should be set for robots, including criteria for evaluating AI's own moral capacity and moral principles for ranking and choosing among values when multiple ethical norms conflict; second, the greater a robot's degree of freedom, the stricter the corresponding AI ethical standards should be. Following this idea of a positive correlation between AI performance or freedom and ethical standards or regulatory intensity, the Artificial Intelligence Act classifies AI risks into four levels, namely unacceptable, high, limited, and minimal, and prescribes different regulatory approaches for each, as the sketch below illustrates. It is particularly noteworthy that the Act places AI applications that are extremely harmful and contrary to European values, including social scoring systems that manipulate individual behavior, real-time remote biometric identification, and predictive policing systems, in the prohibited category. In addition, legal expert systems assisting judges and lawyers and intelligent adjudication projects are identified as high-risk types requiring focused supervision.
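To make the tiered structure concrete, the following minimal Python sketch, illustrative only and not drawn from the text of the Act itself, maps the use cases named above to the four risk levels and to the broad type of obligation each level entails; the obligation wording is a paraphrase, not statutory language.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels described above, with a rough gloss of the regulatory consequence."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, registration, and ongoing supervision"
    LIMITED = "transparency obligations (e.g. disclosure to users)"
    MINIMAL = "no additional obligations"

# Illustrative mapping of use cases mentioned in the article to tiers;
# not an exhaustive or official classification.
USE_CASE_TIERS = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "real-time remote biometric identification": RiskTier.UNACCEPTABLE,
    "predictive policing": RiskTier.UNACCEPTABLE,
    "legal expert system assisting judges": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def required_obligations(use_case: str) -> str:
    """Return the regulatory consequence for a (hypothetical) use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(required_obligations(case))
```

The point of the sketch is simply that regulatory intensity scales with the assessed risk of the application, which is the "positive correlation" idea the hard law model encodes.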


Different Forms of Soft Law and Their Combination with Hard Law


In contrast, the United States enacted the National AI Initiative Act of 2020, aiming to coordinate and accelerate AI research, development, and application, promote economic prosperity and national security, review pathways of AI governance, and balance individual rights with technological innovation; the American approach thus represents a typical soft law model. The Blueprint for an AI Bill of Rights released by the White House Office of Science and Technology Policy in October 2022 and the AI Risk Management Framework released by the National Institute of Standards and Technology in January 2023 are likewise declarations of principle and policy. The voluntary safety commitments on AI research, development, and application that the Biden administration obtained from seven leading AI companies in July 2023 also have no legally binding force. At present, a number of U.S. states have enacted AI legislation, such as Utah's Artificial Intelligence Policy Act of March 2024, which offers regulatory relief for AI innovation through technical exemptions while prescribing administrative and civil penalties as sanctions; more than thirty other states are considering AI bills. These decentralized enactments mostly address specific issues such as data privacy protection, algorithmic explainability, prevention of AI discrimination, and consumer protection. Although the proposals of federal legislators take varying positions, those that have passed are all aimed at promoting the development of the AI industry and securing U.S. technological leadership. However, the Algorithmic Accountability Act introduced by Representative Yvette Clarke and its updated versions of 2022 and 2023 show a clear trend toward hardening, and the AI Accountability Act passed by the House Energy and Commerce Committee in July 2023 presses the government to take substantive accountability measures against AI risks after 2025.


Generally speaking, from P. Nonet and P. Selznick's concept of "responsive law" to governments' "agile governance" approach, both the subjects and the norms of regulation have clearly become more diversified. The World Economic Forum's 2018 definition describes agile governance as "a set of resilient, fluid, flexible, or adaptive actions or methods, a self-adapting, people-centered, inclusive, and sustainable decision-making process." The Japanese government attaches great importance to agile governance in AI risk management and has formed a comprehensive set of flexible, concrete behavioral standards and operational procedures for AI research, development, and application. In April this year, the Ministry of Internal Affairs and Communications and the Ministry of Economy, Trade and Industry jointly released the "AI Guidelines for Business (Version 1.0)," which spell out the primary responsibilities of stakeholders in risk management and a blueprint for the design of agile governance mechanisms, further displaying the soft law character of combining guiding norms with administrative promotion measures.


In 2019, China took the lead in advocating the principle of agile governance of artificial intelligence, with emphasis on the soft law model. In practice, however, governance has relied mainly on the exercise of administrative discretion adapted to local conditions, and it lacks a set of smoothly operable rules and flexible institutional arrangements. The advantage of this approach is that it can respond to changing circumstances from the top down and alternate between soft and hard means of risk management. In September 2021, the National New Generation Artificial Intelligence Governance Specialized Committee issued the "Ethical Norms for New Generation Artificial Intelligence," which takes responsibility review and accountability across all stages of the AI life cycle as one of its basic principles. In March 2022, to implement the "Provisions on the Administration of Algorithm Recommendation for Internet Information Services" issued at the end of 2021, the internet information service algorithm filing system was formally launched, forming a basic "three-in-one" framework of AI supervision consisting of algorithm filing, algorithm inspection, and algorithm accountability. In particular, the 2021 special campaign against algorithm abuse and the 2022 special campaign on comprehensive algorithm governance severely punished irregularities such as big-data-enabled price discrimination against existing customers ("killing the familiar") and platforms forcing merchants to "choose one of two" exclusive arrangements, showing the hard-edged side of regulation. The "Provisions on the Administration of Deep Synthesis of Internet Information Services" issued in December 2022 began to target generative AI. China's experience and institutional design therefore constitute a model that combines soft and hard law.


The Technology-Procedure Path Leads to a Blue Ocean for Technology Enterprises


In implementing the principles and policies of AI governance, Singapore has taken a low-key and pragmatic technological route. In May 2022, the government launched AI Verify, the world's first open-source testing toolkit for AI governance, which integrates testing and inspection and pursues trustworthy AI through a dynamically adjusted process that is secure, flexible, transparent, auditable, accountable, and mutually balanced. This testing framework closely ties safety regulation to improvements in AI systems' own performance, and its use is voluntary, which increases the acceptance of both AI products and regulation. AI Verify is also a rich repository of technical tools, developing targeted regulatory testing schemes along with corresponding test toolsets and datasets for different industries, applications, and product forms. In fields where AI applications carry greater risk, such as autonomous driving, more stringent and mandatory regulatory testing measures are required. This "technology-procedure model" has the potential for widespread adoption. IBM's AI governance suite watsonx.governance, launched in December 2023, closely combines risk prevention and control with the development of automated regulatory tools: it produces AI "nutrition labels" in accordance with AI regulations and policies and computes LLM metrics for proactive detection and mitigation of bias, much like AI Verify. In addition, the Image World Model (IWM), which strengthens AI's self-supervised learning capabilities, can play a similar controlling role. A minimal illustration of this testing-based approach follows.
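To convey the flavor of such test-and-report tooling, here is a minimal sketch in plain Python, deliberately independent of the actual AI Verify or watsonx.governance APIs, of one common governance check: measuring the gap in positive-prediction rates across groups and flagging a model when the gap exceeds a policy threshold. The metric choice, the threshold, and the report format are illustrative assumptions, not features of either product.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs; groups: group label per item.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, grp in zip(predictions, groups):
        counts[grp][0] += pred
        counts[grp][1] += 1
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

def run_governance_check(predictions, groups, max_gap=0.10):
    """Toy test-and-report step: flag the model if the gap exceeds a policy threshold."""
    gap = demographic_parity_gap(predictions, groups)
    verdict = "pass" if gap <= max_gap else "flag for review"
    return {"metric": "demographic_parity_gap", "value": round(gap, 3),
            "threshold": max_gap, "verdict": verdict}

if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0]                   # toy binary decisions from some model
    grps = ["A", "A", "A", "A", "B", "B", "B", "B"]    # toy group labels
    print(run_governance_check(preds, grps))
```

The essential feature is that the regulatory judgment ("pass" or "flag for review") is produced by a reproducible technical procedure that developers can run themselves, which is why voluntary adoption becomes plausible.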


Here, the answer to how legislators can strike an appropriate balance between AI development and the security of the social system, and how the relevant administrative departments can practice agile governance, is already dimly visible. If the research and development of large models is not only the object of AI governance but can in turn empower AI governance, technology enterprises will not be unduly anxious about AI legislation. If safety research on large models can, through the technology-procedure approach, form a toolbox of testing, evaluation, and monitoring, including promoting digital watermarking, developing small verification models for AI, building AIGC anti-fraud systems, establishing indicator systems and certification platforms for AI ethics management, and weaving AI security assurance networks, then regulation and development will no longer be a zero-sum game. AI governance can then also open up new investment opportunities and market space for AI research and development, forming a technological blue ocean for enterprises through differentiated competition. In other words, only when the performance and safety of large language models and multimodal models stand in a certain proportional relationship, and when regulation is no longer rigidly bound to predefined procedural and technical standards, will individual countries and the world truly enter the so-called "moment of AI governance legislation."
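As a closing illustration, the sketch below uses hypothetical placeholder checks rather than any real product to show one way such a toolbox could be organized: each governance tool, whether a watermark detector, a provenance "nutrition label" check, or an ethics indicator, is registered as a small callable that inspects an artifact and contributes to an audit trail. The check names and the artifact fields are assumptions made for the example.

```python
from typing import Callable, Dict, List

# Registry of governance checks; names mirror the toolbox items named above,
# but the functions are illustrative placeholders, not real detectors.
CHECKS: Dict[str, Callable[[dict], str]] = {}

def register(name: str):
    """Decorator that adds a check function to the registry under a given name."""
    def wrap(fn):
        CHECKS[name] = fn
        return fn
    return wrap

@register("watermark_present")
def watermark_present(artifact: dict) -> str:
    # Placeholder: a real system would detect a statistical or embedded watermark.
    return "pass" if artifact.get("watermarked") else "fail"

@register("provenance_label")
def provenance_label(artifact: dict) -> str:
    # Placeholder for an AI "nutrition label": provenance metadata must be attached.
    return "pass" if artifact.get("provenance") else "fail"

def run_all(artifact: dict) -> List[str]:
    """Run every registered check and collect a simple audit trail."""
    return [f"{name}: {fn(artifact)}" for name, fn in CHECKS.items()]

if __name__ == "__main__":
    sample = {"watermarked": True, "provenance": {"model": "example-llm", "date": "2024-06"}}
    print("\n".join(run_all(sample)))
```

The design point is that new checks can be added without rewriting the pipeline, which is exactly the kind of differentiated, composable tooling that could turn governance into a market opportunity rather than a pure compliance cost.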