Zhang Tao
Associate Professor at the Institute for Data Law, China University of Political Science and Law
Abstract: The question of how to effectively govern artificial intelligence (AI) has recently attracted considerable scholarly attention, yet a definitive consensus remains elusive. From a national governance perspective, technical standards have gradually come to the fore in both theoretical discussion and practice, emerging as a central instrument of AI regulation. Fundamentally, integrating technical standards into the AI governance framework aligns with principles such as embedded ethics, governance ecology, knowledge co-creation, and hierarchical governance. However, in practice, the standardized governance of AI encounters challenges related to legitimacy, scientific validity, coordination, and effectiveness, which impede its capacity to optimize regulatory outcomes. This underscores the need to select an appropriate regulatory model for reform. Compared with purely administrative regulation or self-regulation, co-regulation better aligns with the intrinsic nature of AI standards and more effectively addresses the practical challenges encountered. The standardized governance of AI based on co-regulation can be realized through institutional arrangements that define the functional roles, content frameworks, formulation procedures, and oversight mechanisms necessary for the effective implementation of AI standards.
Keywords: Technical standards; Artificial intelligence; Co-regulation; Self-regulation; Regulatory tools
1. Statement of the Problem
In recent years, the rapid development and wide application of artificial intelligence (AI) technology have profoundly influenced all aspects of society. Whether in autonomous driving, intelligent healthcare, fintech, manufacturing, public services, or other fields, AI is reshaping modes of production and ways of life at an unprecedented speed. However, the wide application of AI technology has also brought complex social risks, including technical-ethical issues such as privacy protection, algorithmic transparency, and algorithmic discrimination, as well as legal issues such as personality rights protection, copyright ownership, and liability for harm caused by automated decision-making. The traditional legal framework and governance model face what is called a "pacing problem" when dealing with these emerging issues: many existing legal frameworks rest on a static rather than dynamic understanding of society and technology, and the capacity of legal institutions (legislative, executive, and judicial alike) to adapt to technological change is declining. It is therefore urgent to explore new regulatory models suited to the development of AI. As Benjamin Cardozo once said, "Once new conditions arise, there must be new rules."
Standardization is a way of ordering social life, aiming to achieve a lasting and unified order across time and space. As a polysemous term, "standard" refers both to a model of measurement and to the rules and norms that must be followed to attain that ideal. With the advent of the risk society, regulatory agencies, seeking to integrate professional knowledge to address risks, have gradually turned to technical standards to help realize legal requirements and ethical values. In the governance of AI, technical standards have gradually become a powerful regulatory tool. By regulating the design, data processing, application processes, and other stages of AI systems, technical standards can promote their safety, controllability, and transparency, reduce the uncertainties and risks of technological application, and ensure that the development of AI technology complies with legal, ethical, and social requirements. In China, the Standardization Administration of China and four other departments issued the "National New Generation Artificial Intelligence Standard System Construction Guide" as early as July 2020, providing a top-level design for standardization in the AI field and proposing to "form a new pattern in which standards lead the comprehensive and standardized development of the AI industry." From a comparative law perspective, Recital 61 of the EU's "Artificial Intelligence Act" states that standardization should play a key role in providing providers with technical solutions to ensure compliance with legal requirements and keep pace with the state of the art, thereby promoting innovation as well as the competitiveness and growth of the single market. At the international level, organizations such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) have begun standardization work in the AI field, formulating a series of standards covering AI concepts and terminology, data quality, risk management, trustworthiness, explainability, and ethical impact.
However, the application of technical standards in the governance of AI also faces a series of challenges. First, the complexity and rapid development of AI technology mean that standard-setting lags behind the technology itself; many technical standards may be unable to keep up with new technological developments and application needs, and their effectiveness and adaptability are therefore challenged. Second, the cross-disciplinary and global character of AI technology requires that standard-setting coordinate the interests and needs of different countries and industries. Yet current standard-setting is mostly led by technical experts within the industry and lacks broad social participation, which raises doubts about the legitimacy and scientific validity of the standards. In addition, AI technology itself involves many ethical and social issues; how to embed these ethical and social values in technical standards and make them genuinely effective in application remains an unresolved problem. Although standardized governance is pervasive in China's national governance system, technical standards remain a relatively marginal topic in current legal research. Only scattered studies have explored them from the perspectives of civil law, administrative law, anti-monopoly law, copyright law, and the like, and discussion of the "governance mechanisms" at work in standard formulation and implementation is lacking; systematic research on the standardized governance of artificial intelligence is rarer still. In view of this, this article focuses on the following questions: What is the current state of the application of technical standards in AI governance? What theoretical logic lies behind it? What practical difficulties does it face? And what models and paths should be adopted to shape the standardized governance of AI? Through an in-depth discussion of these questions, this article aims to provide theoretical support and policy suggestions for the standardized governance of AI in China, to promote the effective regulation of AI development and application through technical standards, and to foster the healthy and orderly development of AI technology.
2. Technical Standards as a Tool for AI Governance
Technical standards have become an increasingly important tool in the regulatory toolkit of modern states, especially in risk governance. Standards not only bring a sense of certainty to an uncertain world but also help bridge communication gaps among different stakeholders and viewpoints. With the rapid development and wide application of AI technology, technical standards have gradually become an important tool in its governance system. On the one hand, compared with other technologies, the AI field has a pronounced dependence on standards; on the other hand, AI standards, as the core of the digital and intelligent society, have significant social implications. AI standards are not only the guarantee of interoperability between technical products and services but also a key mechanism for ensuring that the application of AI technology complies with ethical, legal, and social requirements. The embedding of technical standards in AI governance has already made notable progress; it is underpinned by multiple theoretical logics, yet it still faces many challenges and issues that need to be addressed.
2.1 An Overview of the Embedding of Technical Standards in AI Governance
Globally, the use of technical standards as a key tool for AI governance has become a consensus, but different countries and regions have formed distinctive practical paths based on their legal traditions and policy orientations.
2.1.1 Systematic Construction of AI Standards in China
China attaches great importance to the standardization work in the AI field and has initially established a systematic framework consisting of top-level strategic planning, multi-level standard systems, and specialized organizational institutions. Since the State Council issued the "New Generation Artificial Intelligence Development Plan" in 2017, the national level has provided continuous top-level design for standardization through the release of documents such as the "White Paper on AI Standardization (2018 Edition)", the "Guidelines for the Construction of the National New Generation AI Standard System" (issued in 2020), and the "Guidelines for the Construction of the National AI Industry Comprehensive Standard System (2024 Edition)". This has established a comprehensive standard framework covering basic commonalities, key technologies, industry applications, and security governance. To ensure the implementation of AI standardization governance, China has successively established the National AI Standardization General Group, the Expert Consultation Group, and the AI Standardization Technical Committee of the Ministry of Industry and Information Technology, forming a coordinated management mechanism. At the practical level, a multi-level standard system covering national standards, local standards, group standards, and enterprise standards is taking shape, with extensive coverage of data, algorithms, platforms, security, and ethics, promoting the standardized development of the AI industry from various dimensions.
2.1.2 Diversification of AI Standardization Practices Abroad
Internationally, the AI standardization practices of major economies and international organizations show significant differentiation, mainly manifested in three paths: the "legislation-driven" approach of the European Union, the "market-led" approach of the United States, and the "consensus-building" approach of international organizations. The EU's path is centered on legislation, featuring a rights-oriented and strongly integrative approach. Its policy documents and the "Ethics Guidelines for Trustworthy AI" have laid an ethical foundation for standardization. In particular, the EU's "AI Act" innovatively introduced the "harmonized standards" mechanism, making compliance with harmonized standards an important basis for demonstrating that AI systems meet legal requirements, thereby closely coupling technical standards with the legal framework. Institutions such as the European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) are actively formulating corresponding standards to support the implementation of the EU's "AI Act". The US approach, by contrast, emphasizes market leadership and industry self-regulation, stressing flexibility for innovation and global competitiveness. Documents such as the "Artificial Intelligence Risk Management Framework" released by the National Institute of Standards and Technology (NIST) mainly serve as guiding tools to help enterprises voluntarily manage risks. The plan for global engagement on AI standards released by NIST in July 2024 clearly states that standardization should be a private-sector-led and market-driven activity, with the government's role being to provide support as one of many stakeholders through means such as basic research, coordination, and education, rather than through mandatory intervention. International standardization organizations (such as ISO and IEC) are dedicated to building global technical consensus. Their joint subcommittee on artificial intelligence, ISO/IEC JTC 1/SC 42, has issued dozens of international standards in fundamental and general areas such as AI concepts and terminology, risk management, governance implications, and trustworthiness. These standards aim to promote the interoperability of global technologies and products, provide important references for countries formulating domestic standards, and constitute the infrastructure of global AI governance.
2.2 The Diverse Theoretical Logics of Artificial Intelligence Standardization Governance
"Standardization is a microcosm of social practice, political preference, economic calculation, scientific necessity, and professional judgment." The basic function of technical standards is to regulate technical behavior and promote the safety, reliability, and interoperability of products or services from different suppliers. They are also influenced by broader policies. In the governance of artificial intelligence, technical standards are not only tools for regulating specific technical aspects such as algorithms, data usage, and system design, but they also bear the important mission of balancing technological innovation with social responsibility and public interest. From a governance perspective, the theoretical logic of embedding technical standards in artificial intelligence governance can be analyzed from the following four dimensions.
2.2.1 The Ethical Embedding Logic of Technical Standards: The Organic Integration of Technology and Social Ethics
The rapid development of artificial intelligence technology has brought unprecedented ethical challenges, such as data privacy, algorithmic discrimination, and unfairness in automated decision-making. A major limitation of traditional technical ethics assessment is its retrospective nature, meaning that ethical issues can only be effectively addressed after they are identified, and identifying these issues often depends on responding to the harm they cause, leading to long-term delays in providing ethical guidance. Many traditional technical standards often focus on regulating technical operations, but in the field of artificial intelligence governance, the role of technical standards has expanded beyond regulating technical behavior to also embedding social ethics and legal values. Embedded ethics advocates that through the formulation of technical standards, social ethical principles such as fairness, transparency, human dignity, and autonomy are incorporated into the design and implementation of technology, thereby ensuring the coordination and balance between technological development and social needs. This is also in line with the concept of "regulation through design" proposed by the legal community.
From the perspective of artificial intelligence standardization governance practice, technical standards are not merely technical operation guidelines or benchmarks but incorporate considerations of social ethical responsibility. As scholars have pointed out, "standards are both conceptual and material," which reflects the logic of ethical embedding. For example, in the application of artificial intelligence, technical standards can require developers to consider principles of fairness and non-discrimination when designing algorithms and provide specific standards for the transparency and explainability of technology. This "ethical embedding" logic not only enhances the social acceptance of technical standards but also helps predict, identify, and solve ethical issues in artificial intelligence innovation, thereby benefiting individuals and society.
2.2.2 The Governance Ecological Logic of Technical Standards: The Network Effects of Stakeholders
From the perspective of governance ecology theory, value creation is no longer characterized by linearity but occurs in a network composed of interrelated actors and organizations that leverage their respective capabilities and expertise to jointly achieve value creation. Therefore, the collaboration among all parties in an ecosystem can achieve value creation that individual participants cannot. The governance of artificial intelligence is a complex social process involving multiple stakeholders such as governments, enterprises, industry associations, academia, and social groups. They form a stakeholder network and cooperate and compete with each other, thus creating a network similar to a biological ecosystem. Therefore, the success of artificial intelligence governance depends on the collaboration and co-governance among these multiple stakeholders.
The standardization practices in the field of artificial intelligence deeply confirm and concretely illustrate the theoretical core of "governance ecology". In this ecosystem, the creation of technical standards is not a one-way imposition of power but the product of bargaining and joint construction among multiple stakeholders. A distributed "shared governance" architecture ensures that standards are internalized and followed within the industrial ecosystem. This means that the formulation and implementation of technical standards are not the task of technology developers and regulators alone but a multi-party, collaborative process that can generate a "network effect", in which the increase in network value directly benefits all network participants. For example, governments and regulatory agencies can formulate policies and regulations to ensure the compliance and public-interest orientation of standards; academia can provide technical research and theoretical support to ensure the scientific rigor and forward-looking character of standards; and industry can enhance the practicality of standards based on the needs and challenges encountered in actual applications.
2.2.3 The Knowledge Co-creation Logic of Technical Standards: Interdisciplinary Collaboration and Knowledge Integration
The traditional knowledge exchange model is usually linear, that is, a one-way flow from science to industry, with academia typically regarded as the producer of knowledge and the industry as the receiver and user of knowledge. This may lead to a lag in knowledge and difficulty in quickly adapting to market demands and technological changes. Knowledge co-creation transcends the traditional linear model and, through interdisciplinary collaboration, can achieve multi-directional flow, sharing, and integration of knowledge, along with real-time feedback mechanisms, thereby promoting the mutual influence and improvement of academic research and industrial applications.
In the field of artificial intelligence, the formulation and implementation of technical standards essentially embody the logic of knowledge co-creation. First, standard formulation involves multi-disciplinary integration. Artificial intelligence is itself a social technology of a highly interdisciplinary nature, spanning technical fields such as computer science, statistics, control theory, and data science, while also being closely related to disciplines such as law, ethics, sociology, and philosophy. The formulation of technical standards is not limited to using computer science knowledge to set technical benchmarks; it is a process of cross-disciplinary knowledge sharing and cooperation, reflecting the fact that the governance of artificial intelligence as a socio-technical phenomenon relies not only on technological forces but also on the integration of knowledge from various fields. Second, standard implementation involves collaborative innovation. Technical standards are not only norms but also drivers of innovation. Technical knowledge flows between standard-setters and users, which can effectively reduce information costs in the innovation process. "By establishing technical benchmarks for the progressive improvement of products, latecomers do not need to repeat the costs of creating the initial product but can rely on certain functions in existing products and related products." The implementation of artificial intelligence standards likewise requires the integration of knowledge from various disciplines to design corresponding supporting tools (such as monitoring, evaluation, certification, and auditing), thereby promoting the popularization and application of artificial intelligence technology and forming a new innovation ecosystem.
2.2.4 The Hierarchical Governance Logic of Technical Standards: Promoting Adaptive Governance through Modularity
Different artificial intelligence technologies (such as natural language processing, image recognition, etc.) have significant differences in their application scenarios, data processing, algorithm decision-making, etc. Therefore, it is impossible to rely on a single, unified technical standard for comprehensive governance. In this regard, during the process of constructing the governance framework for artificial intelligence, "hierarchical governance" has gradually attracted attention. Modularity is one of the main mechanisms for managing complex systems, aiming to reduce the number of interdependencies that must be analyzed by determining which tasks are highly interdependent and which are not. Hierarchical governance is a special form of modularity, in which different parts of the entire system are arranged in parallel hierarchical structures.
Theoretically, technical standards can be classified along multiple dimensions, including hierarchical attributes, professional (or domain) attributes, usage attributes, and component attributes. According to the differing characteristics, functions, uses, and impacts of artificial intelligence, AI technical standards can be divided into different levels corresponding to different governance requirements and measures, so as to achieve differentiated, hierarchical, dynamic, and flexible governance. For example, the aforementioned "Guidelines for the Construction of the National AI Industry Comprehensive Standard System (2024 Edition)" sets out different key directions for different types of technical standards. For basic common standards, the key directions are standards related to AI concepts and terminology, reference architectures, testing and evaluation, management, and sustainability; for intelligent product and service standards, the key directions are standards related to intelligent robots, intelligent vehicles, and intelligent mobile terminals.
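To make this modular, hierarchical logic concrete, the following minimal sketch (in Python) shows one way a layered standard system could be represented so that each layer carries its own key directions and can evolve on its own schedule. The layer names loosely paraphrase the categories of the 2024 Guide cited above; the data structure and the concrete entries are hypothetical illustrations, not an official catalogue.

```python
# Illustrative sketch of a hierarchical, modular AI standard system.
# Layer names loosely follow the 2024 Guide cited in the text; the data
# structure and entries are hypothetical, not an official catalogue.
from dataclasses import dataclass, field


@dataclass
class Standard:
    code: str          # identifier of an individual standard
    topic: str         # subject matter it regulates
    mandatory: bool    # mandatory vs. voluntary


@dataclass
class Layer:
    name: str                    # e.g. "basic common standards"
    key_directions: list[str]    # priority topics for this layer
    standards: list[Standard] = field(default_factory=list)


standard_system = [
    Layer("basic common standards",
          ["concepts and terminology", "reference architectures",
           "testing and evaluation", "management", "sustainability"]),
    Layer("intelligent product and service standards",
          ["intelligent robots", "intelligent vehicles", "intelligent mobile terminals"]),
    Layer("security and governance standards",
          ["data security", "ethics review", "risk management"]),
]

# Modularity: each layer can be revised on its own schedule, and governance
# measures can be differentiated by layer instead of applied uniformly.
for layer in standard_system:
    print(f"{layer.name}: {', '.join(layer.key_directions)}")
```

Representing the system in this way simply makes visible the interdependencies described above; the governance substance of each layer, of course, cannot be reduced to a data structure.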
2.3 The Practical Predicaments of Technical Standards in Artificial Intelligence Governance
Standardization activities that ignore broad interests and are driven by technocrats may raise fundamental issues such as the legitimacy and validity of standards. Although the standardized governance of artificial intelligence rests on multiple theoretical logics and a certain degree of institutional practice has been carried out, the practical application of technical standards in AI governance still faces many challenges. These challenges concern not only the legitimacy, scientific validity, coordination, and effectiveness of AI standards, but also the complex interactions between technology and society, ethics and law, and standards and law.
2.3.1 The Legitimacy Dilemma: Substantive Legality and Procedural Legitimacy of Standards
The legitimacy of standardization has long been a concern in the academic literature. Political science typically examines legitimacy along three dimensions: input, throughput, and output, which correspond closely to procedural legitimacy (input and process) and substantive legitimacy (output) in the legal context. Although technical standards differ from laws, the formulation of standards closely resembles legislation. In the notion of "code as law", for instance, the designers of Internet norms at different levels of the network architecture are sometimes compared to "legislators" because they influence the behavior of actors in a way similar to legislators. In this sense, technical standards, as technical specifications, shape the benefits and risks that arise when users and third parties employ the technology, and should therefore also meet legitimacy requirements in both substance and procedure. The legitimacy of standardization concerns both the actual content of the standards and the way in which, and the extent to which, all stakeholders can contribute to the decision-making process.
The application of technical standards in the governance of artificial intelligence has triggered a profound legitimacy crisis, the root cause of which lies in the structural contradiction between the quasi-legislative effect of standards and the democratic deficit of their formulation process. Standards, especially those that acquire de facto binding force through network effects or legal references, profoundly shape market behavior and social relations without undergoing the rigorous procedures and legality tests required of traditional legislation. This crisis unfolds simultaneously in the procedural and substantive dimensions, and the two are closely intertwined. The deficit of procedural legitimacy is the most visible manifestation of the problem. The standardization process has long been criticized as a closed, technocratic "club politics". Although nominally open, in reality industry giants, leveraging their resource and information advantages, dominate the standard-setting agenda, while the channels for effective participation by small and medium-sized enterprises, civil society organizations, and the general public are extremely limited. This not only produces a serious imbalance in representativeness but also causes the consultation process to degenerate into a contest of technical interests rather than a review of public rationality. Furthermore, the opacity of decision-making, the knowledge barrier of "paid access", and the formalization of appeal mechanisms further exacerbate this procedural predicament, rendering it incompatible with the principles of openness, participation, and accountability required by the modern rule of law.
The closed nature of the procedure inevitably spills over into substantive content, giving rise to problems of legality and of the public character of standards. First, standard formulation lacks a systematic legal review mechanism. In an environment divorced from sufficient public debate and legal supervision, technical rationality easily overrides value rationality, so that the content of standards may deviate from fundamental legal principles and public policy goals such as privacy protection, fairness and justice, and sustainable development, and may even create "technical shortcuts" for evading legal obligations. Second, the value orientation of artificial intelligence standards may be captured by a few interest groups. When standards become tools for entrenching specific technical paths and erecting market barriers, they are transformed from public goods that promote interoperability into private instruments that restrict competition and harm consumers' rights and interests. This substantive bias is a direct consequence of the lack of due process and may ultimately shake the foundation of social trust in technical standards as governance tools.
2.3.2 The Scientific Dilemma: Knowledge Asymmetry and Technical Complexity in the Standard-Setting Process
Standards are not set according to the subjective will of their drafters; their content is usually based on scientific research results and practical experience. In other words, standards must be scientific and practical. The scientific predicament facing artificial intelligence standardization goes beyond general technical complexity and touches the scientific and rational foundations on which standardization activity rests. It manifests mainly in two intertwined situations: the "semantic transformation deadlock" and the "regulatory timing paradox".
First, the formulation of standards is deeply trapped in a cross-disciplinary "semantic transformation deadlock". As a typical socio-technical system, the standardization of artificial intelligence is necessarily a process in which knowledge from multiple disciplines converges. However, there are profound cognitive barriers and "semantic frictions" among different disciplines. Core governance concepts such as "fairness", "transparency", and "explainability" have distinct connotations and referents in the discourses of computer science, law, and ethics. This semantic incompatibility makes it extremely difficult to reach standard terms that are recognized by all parties and precisely defined. The deeper challenge lies in how to effectively "translate" these normative principles, which carry significant social value, into technical indicators and engineering requirements that computer engineers can understand, measure, and verify. At present, the path from the value judgment of "what ought to be" to the technical realization of "what is" still lacks a mature and recognized scientific methodology, leaving many ethics-related standards stuck at the level of high-level principles that are difficult to implement.
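To illustrate the "semantic transformation" problem, the minimal Python sketch below shows one common way engineers operationalize the normative principle of "fairness" as a single measurable indicator, the demographic parity difference between two groups. The variable names and the 0.1 threshold are hypothetical assumptions made for illustration; whether any such metric adequately captures the legal or ethical meaning of fairness is precisely what remains contested.

```python
# Minimal illustrative sketch: operationalizing "fairness" as one measurable
# indicator (demographic parity difference). Names and the 0.1 threshold are
# hypothetical; they are not drawn from any particular standard.
from typing import Sequence


def demographic_parity_difference(predictions: Sequence[int],
                                  groups: Sequence[str],
                                  group_a: str,
                                  group_b: str) -> float:
    """Absolute difference in favorable-outcome rates between two groups."""
    def favorable_rate(group: str) -> float:
        indices = [i for i, g in enumerate(groups) if g == group]
        return sum(predictions[i] for i in indices) / len(indices)

    return abs(favorable_rate(group_a) - favorable_rate(group_b))


if __name__ == "__main__":
    preds = [1, 0, 1, 1, 0, 1, 0, 0]          # 1 = favorable decision
    grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_difference(preds, grps, "A", "B")
    # A standard might require the gap to stay below a stated threshold (here 0.1),
    # but the threshold itself is a value judgment the metric cannot supply.
    print(f"demographic parity difference = {gap:.2f}",
          "within threshold" if gap <= 0.1 else "needs review")
```

The ease of computing such a number contrasts sharply with the difficulty of agreeing that it is the right number to compute, which is one reason many ethics-related standards remain at the level of principles.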
Second, the formulation of standards is also constrained by the "regulatory timing paradox". This is prominently manifested in the classic "Collingridge dilemma": in the early stage of technological development, although we have the ability to change its trajectory, we do not know how to act because we lack understanding of its long-term impact; yet by the time the technology matures and its impact becomes apparent, although we see the problem clearly, we have lost the ability to intervene effectively because the technology is already deeply embedded in the social system. The exponential pace and evolutionary uncertainty of artificial intelligence technology make this predicament particularly acute, and standardization work constantly swings between "too early" and "too late". Meanwhile, the scientific maturity of different technical branches of artificial intelligence is highly uneven. As relevant research has pointed out, although there is a solid scientific foundation for standardization in areas such as concepts and terminology, research on key governance issues such as robustness verification and explainability methods is still exploratory, and a stable consensus has yet to form. Against this backdrop of uneven scientific foundations, forcibly promoting comprehensive standardization may not only stifle innovation but also solidify immature or even erroneous scientific assumptions into technical norms, posing profound risks.
2.3.3 The Coordination Dilemma: Conflicts of Interest and Strategic Games among Multiple Actors
Technical standards are far from neutral technical texts; rather, they are a means of achieving "formal objectivity" that remains situated, positional, and non-neutral. They are intertwined with power, and their formation is a process of expansion from the center to the periphery, inevitably accompanied by the distribution of resources and interests. In terms of economic function, standardization can enhance market consistency and promote the diffusion and application of technologies. However, excessive standardization, especially when dominated by particular companies or interest groups, may restrict innovation, reduce market competition, and hinder the rapid development of emerging technologies. The process of standardization is therefore, in essence, a complex game of interests. The formulation and implementation of artificial intelligence standards involve numerous stakeholders, including governments, enterprises, technology developers, academia, social organizations, and the general public, whose interests, values, technical demands, and social responsibilities differ. How to coordinate the interests of all parties and promote the unification and coordination of standards is a major challenge currently facing the standardized governance of artificial intelligence.
At the international level, the standardization of artificial intelligence technology is confronted with a significant game of national and economic interests. In recent years, with the "AI regulatory race" brought about by the "AI race", there have been significant differences in standards between the United States and Europe in areas such as data protection, privacy rights, and algorithm transparency, resulting in a lack of uniformity and coordination in AI standards on a global scale. The United States leans more towards market dominance in artificial intelligence standards, emphasizing technological innovation and the maximization of economic efficiency. In contrast, the European Union places greater emphasis on social responsibility and ethical norms, stressing the protection of personal privacy and social justice. This conflict of transnational interests has turned the formulation of international standards into a stage for geopolitical and economic competition, thereby affecting the synergy of global governance.
At the domestic level, the formulation of artificial intelligence standards also faces games among industries and interest groups. For instance, when enterprises participate in standard-setting, they often hope to align the standards with their own technological advantages and commercial interests, which may skew standards toward particular lines of technological development while neglecting public interests and legal compliance. Governments and industry associations are often caught in a dilemma: on the one hand, they need to encourage technological innovation; on the other hand, they must ensure that technical standards are formulated on the premise of safeguarding public interests and respecting the legal framework. This multi-party interest game exacerbates the coordination predicament in the formulation and implementation of artificial intelligence standards.
2.3.4 The Effectiveness Dilemma: Insufficient Enforcement and Supervision Mechanisms for Standard Implementation
"Once a standard is released, if it merely remains on paper without being implemented, it will not automatically transform into productive forces, will not produce any effect, and will have no effect at all." "This means that the effectiveness of technical standards not only depends on the legitimacy and scientific nature of the standards, but also on the execution and supervision mechanisms during their implementation process. However, from the perspective of standardization governance practices in other fields in our country, there exists a problem of "emphasizing standard formulation but neglecting standard implementation". In cutting-edge technology fields such as artificial intelligence, the effective implementation of standards not only relies on technological maturity and industry self-awareness, but also requires a supervision and compliance system that covers the entire life cycle and involves cross-departmental collaboration. However, this is precisely the weak link in current governance practices.
Although many countries and regions, including China, have formulated technical standards in the field of artificial intelligence, how to implement these standards effectively remains a prominent issue in practice. The root of the predicament lies in three interrelated structural flaws. First, the vast majority of current artificial intelligence standards are voluntary; their implementation relies on the conscious compliance of market actors and lacks strong legal safeguards, which leads many enterprises and organizations, when facing compliance pressure, to engage in evasive or merely selective compliance, so that standards are inadequately implemented. Second, China's artificial intelligence regulatory system is still being built; relevant laws, regulations, and regulatory mechanisms remain under development, and there is no regular administrative mechanism for inspecting compliance with artificial intelligence standards. Finally, although China has gradually promoted third-party certification and evaluation mechanisms, these remain markedly underdeveloped: many third-party certification institutions have yet to gain sufficient credibility, and the scope and depth of their assessments cannot match the complexity of artificial intelligence technology.
3. Model Selection for Standardized Governance of Artificial Intelligence: Moving towards Co-regulation
In the rapidly developing field of artificial intelligence, the choice of a standardized governance model is of vital importance. In fact, no regulatory model is perfect, and none is applicable to all situations. The appropriate test for choosing a regulatory model is not whether it is perfect but whether it is the best available approach. Although the administrative regulation model can help allay doubts about the legitimacy and authority of the standardized governance of artificial intelligence, and the self-regulation model can fully leverage the industry's professional expertise in standard-setting, as the application of artificial intelligence continues to deepen, relying solely on administrative regulation or on self-regulation can hardly cope with an increasingly complex and changeable regulatory environment. The standardized governance of artificial intelligence should therefore move beyond the administrative regulation and self-regulation models and, by establishing an institutionalized framework, promote in-depth cooperation among the government, industry organizations, enterprises, academia, and other parties, moving towards a more complete, scientific, and effective co-regulation model.
3.1 Deficiencies in the Standardized Governance of Artificial Intelligence Based on Administrative Regulation
Incorporating artificial intelligence standards into the traditional administrative regulatory framework, that is, treating standards as public norms formulated and implemented under the leadership of the state, undoubtedly holds theoretical appeal. This approach seems the most direct way to respond to demands for the legitimacy and authority of standards, ensuring their implementation through the coercive power of the state. The design of "harmonized standards" in the EU's Artificial Intelligence Act, which links standards to legal liability, demonstrates the strong appeal of this approach. However, when we examine the unique nature of artificial intelligence as a regulatory object, the inherent limitations of the administrative regulation model are fully exposed. The core predicament is that the stability, proceduralism, and knowledge structure on which the traditional administrative bureaucracy relies for effective operation stand in structural conflict with the dynamic evolution, high complexity, and distributed knowledge of artificial intelligence technology. On the one hand, administrative regulation is a typical form of "hindsight" governance. Its rigorous and rigid legislative and decision-making procedures, while endowing it with stability, also mean that it cannot match the exponential iteration speed and fast-moving, "escaping" innovation trajectory of artificial intelligence technology. This "pacing problem" is sharply magnified in the field of artificial intelligence: any technical standard that administrative orders attempt to fix in place may already be outdated by the time it is promulgated, leading to continual regulatory failure.
On the other hand, the high complexity and interdisciplinary nature of artificial intelligence pose an unprecedented challenge to the cognitive capacity of regulatory agencies. Even if administrative agencies can concentrate the most outstanding expert resources, it is still difficult for them to overcome the "epistemological deficit" they face in this emerging field. In artificial intelligence governance, if the government is relied upon as the sole entity formulating and supplying technical standards, two typical risks of regulatory failure arise. First, in pursuit of wide applicability, standards may be diluted into highly abstract declarations of principle, losing their capacity to guide industrial practice effectively. Second, when regulation delves into technical details, the government, owing to the inherent "information and knowledge gap" between itself and the technological frontier, often produces standards with omissions or even distorts the route of technological evolution. Therefore, a purely administrative regulatory approach may not only fail to govern artificial intelligence effectively but, owing to its inherent structural flaws, may also stifle the vitality of technological innovation and ultimately harm social well-being. This is not merely a matter of insufficient governmental capacity, but a systemic functional mismatch exposed when the traditional regulatory paradigm confronts disruptive technologies.
3.2 Reflections on the Standardized Governance of Artificial Intelligence Based on Self-Regulation
Given the inherent limitations of administrative regulation, turning to self-regulation that relies on industry expertise and market efficiency seems to be a shortcut for the standardized governance of artificial intelligence. This model, with its low information cost, high flexibility and acute response to technological innovation, has a strong appeal to both regulators and the industry. Self-regulation promises an "agile governance" that can keep pace with technological progress. The market-driven and industry-led path advocated by the United States in the standardization of artificial intelligence is a concentrated embodiment of this concept. However, the efficiency myth of self-regulation may come at the cost of suspending public accountability. Its core mechanism lies in transferring the right to create norms to private entities, which is essentially a kind of "privatization of norms". When the objects of regulation are limited to internal affairs within the industry, this model is still tolerable. However, when it comes to the field of artificial intelligence, which profoundly affects individuals' basic rights, social equity and public safety, its inherent legitimacy deficit is fully exposed. The most fundamental predicament of self-regulation lies in the fact that it creates an "accountability vacuum" where power and responsibility do not match: private industry associations or technical alliances have obtained the power to formulate quasi-public rules that affect the general public, but they do not need to be accountable to any public institution, nor have they been authorized and supervised through democratic procedures.
Under this structure, the content of standards is extremely vulnerable to being "captured" by powerful commercial interests. The profit motive naturally drives enterprise alliances to set standards in directions that consolidate their technological advantages, build market barriers, or reduce compliance costs, while public interests, such as protecting vulnerable groups, promoting algorithmic transparency, and safeguarding data privacy, may be selectively ignored or even sacrificed. Furthermore, responsible self-regulation requires a mature, stable, and ethically cohesive industry community as its foundation. The current artificial intelligence industry is composed of a heterogeneous mix of participants with high mobility and vast differences in scale, and it is far from forming a stable professional community with endogenous binding force. Expecting such an evolving industry to exercise effective self-restraint therefore exposes public welfare to great uncertainty. Simple self-regulation may not solve the problem; instead, under the guise of "technological neutrality", it may in fact entrench vested interests, ultimately exacerbating rather than alleviating the social challenges brought about by artificial intelligence.
3.3 The Standardized Governance of Artificial Intelligence Should Adopt a Co-regulation Model
In the face of the limitations of administrative regulation and self-regulation, theory and practice have explored many alternatives, such as meta-regulation (or mandatory self-regulation), responsive regulation, smart regulation, and co-regulation. Among them, co-regulation is widely used in Internet information governance, in areas such as audiovisual media regulation, data privacy, domain name governance, content filtering, Internet security, and network neutrality. It is generally understood that co-regulation regulates social phenomena not only through top-down supervision but also by involving private stakeholders in the rule-making process. In other words, under the co-regulation model, "policy travel" can be bidirectional: it can be a bottom-up transformation from self-regulation to co-regulation, or a top-down shift from administrative regulation to co-regulation.
Like administrative regulation and self-regulation, co-regulation takes various forms, which nevertheless share the following common features. First, they are problem-oriented, favor cooperation over confrontation, and make full use of corporate social responsibility as an incentive for corporate behavior. Second, they rely on enterprises or industry associations to perform various governmental functions, with government departments acting as conveners and facilitators of negotiations among multiple stakeholders. Third, cooperative regulatory guidelines are not as prescriptive as national laws and regulations (which usually specify actions that must be taken) but are more open-ended (stating general intentions or expected results), leaving the regulated parties greater discretion in formulating specific implementation plans. Fourth, the participation of stakeholders and affected parties at all stages of regulatory decision-making helps to solve problems effectively, as regulated parties are usually more willing to abide by rules they helped to formulate, which raises compliance rates. Fifth, co-regulation changes the traditional role of government, shifting it from formulating rules and imposing sanctions when industries violate them to providing incentives while implementing regulatory plans and retaining ultimate supervisory and enforcement powers. A new governance structure composed of multiple participants, networked connections, and institutionalized arrangements is gradually replacing or reinforcing the traditional government-centered hierarchical supervision system.
Precisely because of these characteristics, a growing number of voices advocate using co-regulation to address issues raised by artificial intelligence, such as algorithmic black boxes and information privacy protection. This article holds that, compared with administrative regulation and self-regulation, the co-regulation model is better suited to the standardized governance of artificial intelligence. First, the co-regulation model conforms to the power structure of the artificial intelligence standardization process. Against the backdrop of an advancing wave of privatization, the role of private actors in national governance has become increasingly prominent, and various "hybrid rules" have emerged one after another. According to scholars' analysis, hybrid rules reflect a new stage in the evolution of state power: at this stage, state power is no longer exercised solely by traditional government institutions but is reinforced through privatization and the subsequent depoliticization of the public domain. A cooperative, co-governance relationship between the state and private actors gradually replaces the traditional government-led model, forming a new configuration of power. From the perspective of Weber's and Foucault's theories of power, technical standards exhibit the characteristics of "hybrid rules": they are not merely collections of technical and operational norms; rather, they reflect how power is exercised globally through technical standards and, in this way, influence the behavioral decisions of organizations and individuals as well as the cultural values of society. Therefore, the co-regulation model helps to establish a multi-level regulatory system by integrating "powers" from different spheres (both public and private), so that the standardization of artificial intelligence no longer relies solely on government intervention or private self-discipline but achieves a dynamic balance through the participation and collaboration of multiple actors, thereby domesticating this new type of "hybrid authority".
Secondly, the co-regulation model can better align with the theoretical logic contained in the standardized governance of artificial intelligence. The development of artificial intelligence technology is not merely a breakthrough at the technical level, but also involves profound social and ethical issues, such as privacy protection, fairness, and transparency. The co-regulation model, through the participation of multiple subjects, can ensure that ethical review runs through the entire process of standard formulation. The co-regulation model can provide a dynamic and flexible framework. Through the cooperation of multiple parties such as the government, enterprises, industry organizations, academia, and technology developers, a collaborative, interactive, and complementary governance ecosystem is formed. This governance model enables all parties not only to provide professional opinions during the standard formulation process but also to play an active role in the standard implementation process.
Finally, the co-regulation model is conducive to resolving the current predicament faced by the standardized governance of artificial intelligence. In the process of standardized governance of artificial intelligence, legitimacy derives not only from the coercive power of the government but also from the recognition and support of all sectors of society. Under the co-regulation model, the extensive participation of multiple stakeholders such as the government, enterprises, academia, and the public can make the process of formulating artificial intelligence standards more open and transparent, ensuring that the interests and voices of all parties are fully reflected. This kind of multi-party collaboration can strengthen the social foundation of standards and enhance their legitimacy and enforceability. Furthermore, the effectiveness of artificial intelligence standards depends not only on their formulation process but also on their enforceability in practical applications. Under the co-regulation model, the implementation and supervision of artificial intelligence standards are jointly undertaken by multiple entities; among them, the state can still retain supervision over the standards by setting up corresponding mechanisms to ensure that problems are identified and adjusted in a timely manner in practice. This hierarchical governance and dynamic update model can ensure that standards are continuously optimized in line with changes in artificial intelligence technology and society, thereby enhancing their implementation effectiveness.
In conclusion, an artificial intelligence standardization governance model based on co-regulation can, through the collaborative efforts of multiple actors, effectively address the predicaments of legitimacy, scientific validity, coordination, and effectiveness currently facing the standardized governance of artificial intelligence, and can provide theoretical support and practical paths for building a more complete and sustainable governance system for artificial intelligence standardization.
4. Institutional Design for Standardized Governance of Artificial Intelligence: Based on the Path of Co-regulation
Although the role of technical standards in the governance of emerging technologies is increasingly prominent, they are not perfect regulatory tools; if used improperly, they may still produce many negative effects and need to be examined carefully from the perspective of "regulatory governance". To ensure the effective implementation of the co-regulation model, its institutional design must respond to the aforementioned predicaments of standardized governance and absorb the essence of theories such as responsible innovation and due process. The systems-theory account of "structural coupling" between the legal system and other social subsystems (such as technology and the economy) offers profound inspiration for understanding the interaction between standards and law. It suggests that effective governance is achieved not through one side's direct control over the other, but by creating institutionalized interfaces that enable the legal system to exert "regulatory pressure" on the standard-setting process while the standard system provides "evolutionary stimuli" to the legal system. On this basis, this article holds that the institutional design of artificial intelligence standardization governance in China can be improved in four respects. First, regarding the functional status of artificial intelligence standards, the harmonious coexistence of technical standards and legal rules should be promoted. Second, regarding the content of artificial intelligence standards, an evidence-based approach should be adopted to determine the themes and priorities of standards scientifically. Third, regarding the procedures for formulating artificial intelligence standards, the principle of due process should be followed to the greatest extent possible. Fourth, regarding the supervision of the implementation of artificial intelligence standards, a dynamic accountability mechanism should be established, including mechanisms for filing and review, assessment, and certification.
4.1 The Functional Status of Artificial Intelligence Standards: The Harmonious Coexistence of Standards and Laws
Against the backdrop of the rapid development of artificial intelligence technology, the harmonious symbiosis between technical standards and legal rules has become a key issue in building an efficient and sustainable governance system. From the perspective of systems theory, the standardized governance of artificial intelligence is not merely a simple superposition of technical standards and legal rules, but rather a process in which the two interact in a complex relationship and achieve overall system optimization and coordinated operation through mutual influence, dynamic adjustment and feedback mechanisms. Therefore, to understand the coordinated relationship between technical standards and legal rules, it is necessary to start from their functional positions in the artificial intelligence governance system and examine the mechanisms and paths of their synergistic effects.
In terms of functional positioning, technical standards and legal rules play different but complementary roles in the governance of artificial intelligence. As the embodiment of the will of the state, legal rules mainly function to establish the fundamental principles for the research and application of artificial intelligence, to demarcate boundaries of rights that cannot be crossed, and to safeguard core public interests. They constitute the "macro framework" of the governance system, providing, through universally applicable provisions, a fundamental source of legitimacy and legal guarantee for the formulation and implementation of technical standards. However, the inherent generality and stability of legal rules inevitably produce a "pacing problem" when confronted with ever-changing technical details: the speed of legal updating lags far behind the speed of technological evolution. Technical standards can compensate precisely for this structural shortcoming of the law. Jointly formulated by industry experts, technical developers, and other actors, technical standards are more adaptable and flexible, can respond quickly to technical demands, and provide specific, operational normative guidance. They delve into the technical details of artificial intelligence system design, data processing, risk management, and the like, concretizing macro legal principles into measurable and verifiable technical requirements, thereby effectively reducing uncertainties in the application of technology. The function of technical standards in the governance system is therefore the refined elaboration and dynamic realization of legal rules, and they are a key tool for ensuring the compliance and security of technology applications.
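As a hypothetical illustration of how standards can concretize an abstract legal principle into measurable and verifiable requirements, the sketch below expresses a transparency obligation as a small checklist against which system documentation can be tested. The clause identifiers, requirement texts, and check logic are invented for illustration and are not drawn from any existing standard or law.

```python
# Hypothetical sketch: translating an abstract legal principle ("transparency")
# into concrete, checkable requirements that a conformity assessment could verify.
# Clause IDs, requirement texts, and checks are invented for illustration.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Requirement:
    clause: str                    # identifier within the (hypothetical) standard
    description: str               # human-readable obligation
    check: Callable[[dict], bool]  # verifiable test applied to system documentation


transparency_requirements = [
    Requirement("T.1", "Users are informed that they are interacting with an AI system",
                lambda doc: bool(doc.get("user_notice"))),
    Requirement("T.2", "Intended purpose and known limitations are documented",
                lambda doc: bool(doc.get("intended_purpose")) and bool(doc.get("limitations"))),
    Requirement("T.3", "Provenance of training data is recorded",
                lambda doc: bool(doc.get("data_provenance"))),
]

# Documentation submitted by a hypothetical provider.
system_doc = {
    "user_notice": True,
    "intended_purpose": "customer service dialogue",
    "limitations": "not for medical or legal advice",
    "data_provenance": "",   # missing: would be flagged as a gap
}

for req in transparency_requirements:
    status = "conforms" if req.check(system_doc) else "gap"
    print(f"{req.clause} {req.description}: {status}")
```

A checklist of this kind also gives legislators and conformity assessors a concrete object whose observed gaps and failures can feed back into the revision of legal rules, consistent with the bidirectional relationship described below.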
However, the relationship between technical standards and legal rules is by no means a one-way supplement; it is a bidirectional, co-evolving symbiotic relationship. The authority of law needs to be implemented through the technicality of standards, while the validity of standards must be confirmed and guaranteed within the framework of law. This dynamic interaction can be achieved through two institutionalized paths.
The first is that legal rules shape the direction, boundaries and core values of technical standards. To strengthen the intrinsic connection between technical standards and legal rules, it is necessary to rely on the top-level design of the "legalization of standards" and to incorporate the standard-setting powers, or the core requirements, in key areas such as the basic security, core ethics and major risk prevention of artificial intelligence into the normative framework of future artificial intelligence legislation. The legislative structure of China's Food Safety Law can be drawn upon here. On the one hand, the legal status, formulation principles, management system and requirements for the promotion and application of artificial intelligence standards should be clearly defined in the general provisions. On the other hand, special chapters or articles can clearly stipulate the nature of artificial intelligence standards in key areas, the formulating entities, the core scope of content, the formulation procedures, and the legal responsibility for violating mandatory standards, thereby endowing these core standards with clear legal effect. It is also necessary to ensure, in accordance with the law, the independence, professionalism and procedural legitimacy of standardization organizations at all levels in the development of artificial intelligence standards, and to guarantee that the standards they formulate effectively meet the requirements and spirit of the law.
Second, technical standards promote the dynamic adjustment and forward-looking innovation of legal rules in the reverse direction. Under conditions of rapid development of artificial intelligence technology, emerging technological forms, application models and potential risks often exceed the predictive scope and coverage of current legal norms. As the "sensors" and "test platforms" of technological development and market application, technical standards can sensitively detect these new changes, provide valuable practical data for the updating, refinement and improvement of legal norms, and continuously and reasonably shape the facts on which regulation rests. In particular, assessments of the effectiveness of artificial intelligence standards in practical application, their risk exposure, and the resolution of the disputes they trigger can all reveal to legislators the deficiencies and blind spots of the existing legal framework in addressing emerging challenges, providing a targeted evidentiary basis for the formulation and revision of artificial intelligence laws and even for the formation of judicial interpretations. For instance, the new challenges posed by generative artificial intelligence with respect to the protection of personal information rights, the ownership of intellectual property rights, and the authenticity of content may be difficult to address comprehensively under the current legal framework. In such circumstances, it is advisable to encourage the early formulation of relevant technical standards or group norms and, by leveraging application practice in "regulatory sandboxes" or specific pilot areas, to collect data, accumulate experience and evaluate effects. This provides legislators with a sufficient decision-making basis for formulating more adaptive, forward-looking and operational legal rules that balance innovation incentives with risk prevention and control, thereby promoting the continuous development of the legal system. In international practice, the European Committee for Standardization and the European Committee for Electrotechnical Standardization have provided professional assessment opinions on the EU's "White Paper on Artificial Intelligence" and "Artificial Intelligence Act", suggesting that the definitions of key terms refer to existing ISO/IEC international standards and that mature risk management standards be integrated into the risk-based governance path for artificial intelligence.
4.2 The content design of artificial intelligence standards: Scientifically determine the themes of the standards
Theoretically, scholars have proposed the concept of "responsive law" to denote a responsible, differentiated and selective adaptability, meaning that "a responsive institution still holds onto what is essential for its integrity, while also considering various new forces in its environment". In the standardized governance of artificial intelligence, the content design of standards is the core link in ensuring the effectiveness and social adaptability of technical norms. As artificial intelligence technology continues to evolve and grow more complex, how to scientifically determine the themes and priorities of technical standards has become a major challenge in current standardization work. In this process, adopting evidence-based and experiment-based methods is an important path toward making standard formulation more scientific and rational. These two methods not only provide sufficient data support and experimental verification, but also ensure that the content of standards remains forward-looking and effective in a dynamically evolving technological environment.
4.2.1 Evidence-based approach: Strengthening data-driven standard design
Reliable evidence is usually regarded as an important basis for formulating policies and improving services. Indeed, without strong evidence and without in-depth exploration of the various options and their possible outcomes, it is almost impossible to achieve high-quality decision-making. The evidence-based approach emphasizes guiding the formulation of standards through data, research findings and practical experience. It requires standard setters to rely on a large body of factual evidence and case analysis to ensure that the themes of standards accurately reflect actual needs, and to minimize subjective assumptions and theorizing detached from reality. In the field of artificial intelligence, the formulation of standards requires not only theoretical support but also close integration with the actual application scenarios of, and the specific problems arising in, artificial intelligence technology.
First, establish an evidence base through big data analysis and evaluation. In the content design of artificial intelligence standards, the construction of an evidence base is of vital importance. Ideally, artificial intelligence standards would be formulated on the basis of scientific data, but in practice very little data can actually be used to inform standard-setting decisions. Big data can generate ecologically valid, high-quality scientific evidence: by analyzing large numbers of technical reports, research papers, market research data and industry feedback, it can help standard setters accurately identify technological development trends and potential governance risks. For instance, standards for the transparency and interpretability of artificial intelligence algorithms can draw on big data analysis of the behavior of existing artificial intelligence models, evaluating their transparency and interpretability and thereby providing data support for standard formulation. At the same time, evidence-based methods can help identify pain points and loopholes in the use of artificial intelligence technology, ensuring that the content of standards is scientific and practical. For instance, in response to the challenges artificial intelligence poses to data privacy, targeted technical standards can be established by analyzing past cases of privacy leakage to ensure the legality and security of data processing.
Second, determine standard priorities in an evidence-driven manner. Evidence-based methods can not only help determine the subject matter of technical standards, but also provide a basis for selecting priorities. In the wide application of artificial intelligence technology, different fields face different technical challenges, and standard formulation should give priority to issues that are far-reaching and urgently in need of regulation. For instance, when artificial intelligence is applied in sensitive fields such as healthcare and finance, standard topics such as data privacy protection and algorithmic fairness may be prioritized. Evidence-based analysis can help assess the technical risks in different fields, thereby providing a scientific basis for prioritizing standard content and avoiding losses to social interests caused by lags in standard formulation. In comparative law, the National Institute of Standards and Technology (NIST) of the United States divides the artificial intelligence standard system into three dynamic tiers based on technological maturity and regulatory urgency: the first comprises foundational and timely standard areas, covering artificial intelligence terminology and classification, measurement methods and metrics, transparency mechanisms, risk management mechanisms, security and privacy, training data, and the like; the second comprises standard areas that offer regulatory foresight but still require in-depth research to reach consensus, such as the quantitative assessment of the resource consumption of artificial intelligence models; the third comprises standard areas of great significance that still require breakthroughs in fundamental theory, such as the technical implementation of interpretability and explainability and efficient human-machine collaborative configuration.
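To make the evidence-driven prioritization described above more concrete, the following is a minimal illustrative sketch in Python. It assumes a hypothetical collection of incident reports gathered from industry feedback and case studies, and ranks candidate standard topics by a simple score combining incident frequency and average severity; the topic names, severity scale and scoring rule are illustrative assumptions only and are not drawn from any existing standardization procedure.

```python
from collections import defaultdict

# Hypothetical incident reports gathered from industry feedback and case studies.
# Each record: (candidate standard topic, severity on a 1-5 scale).
incident_reports = [
    ("data_privacy", 5), ("data_privacy", 4), ("algorithmic_fairness", 4),
    ("transparency", 3), ("data_privacy", 5), ("robustness", 2),
    ("algorithmic_fairness", 5), ("transparency", 2),
]

def rank_standard_topics(reports):
    """Rank candidate standard topics by a simple evidence score:
    incident frequency weighted by average severity."""
    counts = defaultdict(int)
    severity_sum = defaultdict(int)
    for topic, severity in reports:
        counts[topic] += 1
        severity_sum[topic] += severity
    scores = {
        topic: counts[topic] * (severity_sum[topic] / counts[topic])
        for topic in counts
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for topic, score in rank_standard_topics(incident_reports):
        print(f"{topic}: evidence score = {score:.1f}")
```

Under such a ranking, topics with frequent and severe reported problems (here, data privacy) would be flagged as higher-priority candidates for standard formulation, while lower-scoring topics would be deferred or monitored.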
Third, scientifically delineate the content of standards at different levels. As mentioned earlier, in China's standardization practice, artificial intelligence standardization activities have been carried out at all levels, including national standards, industry standards, local standards and group standards. However, China's Standardization Law does not set clear boundaries for the content of standards at different levels, which has led in practice to conflicts among standards and to violations of legal provisions. For instance, the China Consumers Association once proposed strengthening the supervision of group standards in the field of food safety, on the ground that "group standards inherently carry a strong characteristic of group interests. Participating enterprises may formulate provisions that benefit the enterprise based on their own positions, or may even intentionally violate relevant procedural rules to achieve special purposes." In view of this, this article holds that future artificial intelligence legislation in China should, on the basis provided by big data analysis and evaluation, make directional provisions on the scope of content of standards at each level. For instance, terminology standards and basic general standards that involve unspecified users or consumers and the public interest should not be determined by industry standards, local standards or group standards, but by national standards, so as to prevent industry standards and group standards from becoming tools for enterprises to collude and harm the legitimate rights and interests of users or consumers.
4.2.2 Experiment-based approach: Verify the adaptability and effectiveness of standards through experiments
In the field of new technology regulation, theory and practice have produced various forms of experiment, including experimental governance, experimental legislation and experimental innovation. Experimental governance aims to explore and evaluate the effects of policy implementation at different levels of governance, in particular how policies respond to the needs of different social groups in actual operation, how they advance the public interest, and how they balance conflicts among different interests. Experimental legislation focuses on verifying the effectiveness and operability of specific legislative measures in addressing specific regulatory issues. Experimental innovation focuses mainly on the launch of new products and services and on the effects and side effects these innovations bring about. Article 18 of the Standardization Law of the People's Republic of China stipulates that, when formulating group standards, investigation, analysis, experimentation and argumentation related to the standards shall be organized. In recent years, experiment-based methods have gradually gained attention in the governance of technologies such as artificial intelligence and robotics. In the standardized governance of artificial intelligence, the experiment-based approach emphasizes verifying the adaptability, effectiveness and other effects of technical standards through experiments, and promptly identifying and correcting potential problems that may arise during the implementation of standards.
First, set reasonable goals for experimental standardization. The core objective of experimental standardization is to comprehensively test and evaluate both the effectiveness and the broader effects of standards. The effectiveness of artificial intelligence standards refers to their direct impact on achieving specific goals, for example ensuring that the transparency and interpretability of artificial intelligence systems are achieved as expected. The effects of artificial intelligence standards, however, are not limited to this; they also involve other consequences that may arise during implementation, especially unexpected ones. A standard may be effective in achieving its stated goals yet in practice cause unforeseen problems or side effects, which may negatively affect certain groups and may even exacerbate social injustice. It is therefore particularly important to test the potential effects of artificial intelligence standards.
Second, utilize artificial intelligence regulatory sandboxes to achieve experimental standardization. The regulatory sandbox is a regulatory model that originated in the fintech field, aiming to allow enterprises to test innovative products, services or business models under certain conditions while ensuring that potential risks to consumers and society remain controllable. In recent years, this model has gradually been extended to the field of artificial intelligence, providing new ideas and practical scenarios for promoting experimental standardization. The technical and practical character of artificial intelligence standards makes them particularly suitable for early testing in sandbox environments. In this way, the regulatory sandbox can build a bridge between the formulation of laws and that of standards, providing reliable data and a practical basis for legislators and technical standard setters and thereby ensuring the standardized development of artificial intelligence technology. Furthermore, before artificial intelligence standards are officially approved, the regulatory sandbox can test them experimentally by simulating actual operating conditions in realistic environments. This method can not only verify the technical feasibility of a standard, but also evaluate its applicability and effectiveness across different application scenarios.
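As a purely illustrative sketch of how a draft standard might be tested in a sandbox run before formal approval, the following Python fragment checks a hypothetical transparency requirement (a minimum share of automated decisions accompanied by an explanation) against simulated pilot scenarios; the threshold, scenario names and pass/fail rule are assumptions made for illustration and are not drawn from any actual standard.

```python
from dataclasses import dataclass

# Hypothetical draft requirement: at least 90% of automated decisions in a
# sandbox scenario must be accompanied by a human-readable explanation.
EXPLANATION_RATE_THRESHOLD = 0.90

@dataclass
class SandboxScenario:
    name: str
    decisions: int   # automated decisions observed in the sandbox run
    explained: int   # decisions for which an explanation was produced

def evaluate_scenario(scenario: SandboxScenario) -> dict:
    """Evaluate one sandbox run against the draft transparency requirement."""
    rate = scenario.explained / scenario.decisions if scenario.decisions else 0.0
    return {
        "scenario": scenario.name,
        "explanation_rate": round(rate, 3),
        "meets_draft_standard": rate >= EXPLANATION_RATE_THRESHOLD,
    }

if __name__ == "__main__":
    runs = [
        SandboxScenario("credit_scoring_pilot", decisions=1000, explained=960),
        SandboxScenario("medical_triage_pilot", decisions=500, explained=410),
    ]
    for run in runs:
        print(evaluate_scenario(run))
```

Whether a threshold-based rule of this kind is an appropriate operationalization of transparency is precisely the sort of question that sandbox testing is meant to surface before a standard is finalized.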
4.3 The formulation procedure of artificial intelligence standards: Follow the principle of due process
The principle of due process is a fundamental guideline for administrative and legislative activity in modern states governed by the rule of law. Its core lies in ensuring the openness, fairness and impartiality of the decision-making process, while providing all directly or indirectly affected stakeholders with the opportunity to fully express their opinions, participate in consultation and receive reasoned responses. Although technical standards, especially the large number of recommended standards, differ in legal status from legal norms backed by coercive force, their profound impact on market behavior, technical paths and even social well-being means that their formulation procedures must also strictly follow and fully embody the essence of the principle of due process, as the basis for obtaining legitimacy and social recognition. As scholars have pointed out, "the formation of internal administrative law in the field of standardization neither stems from the endogenous institutional consciousness of the industrial sector nor from the autonomous evolution of procedural rules in the engineering field"; its development is in fact a long and complex process of norm transplantation, through the absorption, and sometimes even the forced introduction, of norms and principles from multiple legal sources. The formulation of artificial intelligence standards, given its high technical complexity, potential ethical risks and broad, profound impact on all levels of society, places even more stringent demands on the legitimacy of its formulation procedures. This implies not only completeness of procedural form, but also substantive checks on power and democratic participation. Chapter Two of China's Standardization Law already contains principle-based provisions on "the formulation of standards", emphasizing that the principles of "openness, transparency and fairness" should be followed, and Chapter Two of the EU's "Regulation (EU) No 1025/2012" specifically stipulates the core requirements of "transparency and stakeholder participation". However, in the face of problems such as the easy expansion of the discursive power of technical elites and the insufficient substantive participation of the public and of small and medium-sized enterprises in the formulation of artificial intelligence standards, we need to go beyond traditional procedural requirements and explore a more inclusive, empowering and substantive system of due process guarantees.
The first is to establish a mechanism for substantive multi-party participation under the framework of co-regulation. Procedural legitimacy is a product of the standard-setting process and turns on which subjects participate and under what conditions. The formulation of technical standards, especially artificial intelligence standards with profound impact on social and economic life, must ensure the substantive participation of the broadest range of stakeholders, rather than remaining at the level of formal consultation or the sporadic attendance of a few representatives. The participants should include not only technical experts, representatives of leading enterprises in the industry and relevant government regulatory departments; attention and resources should also be devoted to absorbing and empowering those participants who are usually disadvantaged in the standard-setting game, such as ethics scholars, representatives of social organizations (especially consumer rights protection organizations and environmental protection organizations), representatives of small, medium and micro enterprises, and representatives of the general public. Although Article 7 of China's Standardization Law encourages multiple actors to participate in standardization work, in practice the key remains how to ensure that such participation is "substantive" and "effective". The EU's standardization regulation goes further in this regard, emphasizing that standardization organizations must take active measures to encourage and facilitate the representation and effective participation of all relevant parties, especially micro, small and medium-sized enterprises, consumer representatives and societal stakeholders (such as environmental and trade union organizations), for example by simplifying procedures, providing technical assistance and reducing fees to lower their thresholds for participation.
Second, ensure a high degree of transparency and information disclosure throughout the entire process of standard formulation. Within the framework of administrative law, transparency is a key element of due process: it obliges administrative agencies and related organizations to disclose their decision-making processes, bases, reasons and final results proactively, promptly and comprehensively, thereby submitting to external supervision and evaluation. The principle of due process imposes requirements on the standard formulation process: every key stage, from project initiation, drafting, solicitation of opinions, technical argumentation and review to final release and subsequent interpretation and revision, should be made as public as possible, and convenient and effective feedback channels should be provided to the public. In the field of artificial intelligence, which is highly technology-intensive and has a profound social impact, the standard-setting process should serve as a model of transparent governance. A unified, authoritative and easily accessible "Artificial Intelligence Standard Information Disclosure Platform" could be established to systematically publish all draft national, industry, local and influential group standards for artificial intelligence that are under development or have been released, together with core documents such as their drafting explanations, the main differences of opinion and how they were handled, technical verification reports, and ethical impact assessment reports. During the public consultation period for a draft standard, sufficient time should be given to all sectors of society for study and feedback, and standard-setting bodies should, in an appropriate manner, publicly explain which public comments were adopted and why. At the same time, open access to standard texts should be promoted step by step. Especially for technical standards involving public safety or citizens' basic rights, and those developed with government financial support, their public-interest character must be strengthened; removing access barriers such as "paywalls" helps guarantee the public's right to know and keeps channels of supervision open.
The third is to improve the cross-disciplinary expert review and decision-making mechanism. Under the principle of due process, the formulation of technical standards not only requires extensive solicitation of opinions from all parties, but also depends on rigorous, independent and broadly representative expert review. The National Food Safety Standard Review Committee system stipulated in Article 28 of China's Food Safety Law reflects this spirit: it requires that the review committee be composed of experts from fields such as medicine, agriculture, food, nutrition, biology and the environment, as well as representatives of relevant departments, industry associations and consumer associations, to conduct a comprehensive review of the scientific soundness and practicality of draft standards. Because artificial intelligence standards simultaneously involve cutting-edge technologies, complex legal issues and profound ethical implications, their expert review mechanisms should emphasize interdisciplinarity and diversity of perspectives. In the formulation of artificial intelligence standards, a highly credible interdisciplinary expert committee should be established or authorized, broadly incorporating experts from technical fields such as computer science, data science and control theory while also ensuring the in-depth participation and substantive voice of experts from the social sciences and humanities, including law, ethics, sociology, psychology, political science and communication studies. The committee's review should not be limited to technical feasibility and economic benefits; it should also comprehensively and prudently assess the potential social impacts, ethical risks and legal conflicts that a draft standard may entail. As to the decision-making mechanism, resolutions should be formed in accordance with procedural requirements such as supermajority voting (for example, approval by four-fifths of the attending experts) or consensus through consultation; the review process should be recorded in detail, and the relevant conclusions and reasons should be made public.
Fourth, improve the mechanisms for lodging objections to standards and for conducting regular reviews. Throughout the formulation and implementation of technical standards, stakeholders may raise objections or appeals regarding specific provisions, formulation procedures, interpretation and application, or implementation effects. The principle of due process requires clear, convenient, fair and effective channels for handling such objections and appeals. Article 35 of China's Standardization Law provides for a "complaint and reporting mechanism", mainly requiring the administrative department for standardization and other departments to publicly accept complaints and reports and to assign personnel to handle them, but this leans toward supervision at the administrative level. In the formulation and implementation of artificial intelligence standards, in addition to strengthening the complaint and reporting mechanism led by administrative authorities, standardization organizations at all levels should also establish procedures for objections to standards and for regular review: (1) formulate corresponding procedural rules and clearly stipulate that, during the public announcement period of a draft standard and within a certain period after a standard is released, stakeholders may raise formal objections or requests for review regarding its substantive content, procedural flaws or potential negative impacts; (2) establish a relatively independent objection-handling team or committee within the organization to register, investigate and, if necessary, hold hearings on the objections received, and to provide a written, reasoned response within the prescribed time limit; (3) establish a regular review system for artificial intelligence standards, proactively assess the effectiveness, adaptability and potential risks of standards, and promptly initiate revision or abolition procedures in light of technological development, legal changes and social feedback, so as to ensure the "timeliness" and "responsiveness" of standards.
4.4 Implementation supervision of artificial intelligence standards: Establish a dynamic accountability mechanism
Transforming principles into practice has become one of the most urgent challenges in artificial intelligence governance, and it is also the problem faced by the standardized governance of artificial intelligence. To promote the comprehensive implementation of artificial intelligence standards and enhance their "output legitimacy", it is particularly important to establish a sound supervision and accountability mechanism. This mechanism should not only ensure that standards comply with legal, ethical and social requirements, but also be dynamic and operational so as to adapt to technological innovation and changing application scenarios. Under the co-regulation model, taking China's current standard filing and review system as the starting point, a public-private collaborative post-implementation assessment and review mechanism should be established, thereby providing a solid institutional guarantee for the "regulatory effectiveness" of artificial intelligence standards.
First, establish a standard review mechanism based on the filing and review system. China's filing and review system has achieved good results in multiple fields; in the review of administrative regulations and normative documents in particular, it has played an effective role in ensuring legality. In standardization activities, China generally requires that national standards be reviewed by a standard review committee before approval and release, but there is no clear requirement as to whether local standards, industry standards and group standards must be reviewed. In the field of food safety, for instance, China's Food Safety Law provides that national standards are to be reviewed and approved by the National Food Safety Standard Review Committee, while local standards and enterprise standards need only be filed with the health administrative department. Given that artificial intelligence technology may have extensive technical, economic and social impacts, if defective definitions, measurements or methods become the standards guiding the development of artificial intelligence technology, the related consumer products or services may enter the market, the human body and the environment in potentially harmful ways. This article therefore holds that a more responsive standard review system should be established in the governance of artificial intelligence standardization: (1) The Standardization Administration of the People's Republic of China and the Ministry of Industry and Information Technology should form a national standard review committee for artificial intelligence, whose members should be representative and authoritative, including experts and scholars from various fields as well as representatives of industry associations and consumer protection associations. On this basis, national standards should be reviewed by the national standard review committee for artificial intelligence before approval and release, and local standards concerning unspecified users or consumers and the public interest should be filed with and reviewed by that committee. (2) The industry and information technology departments of the people's governments of provinces, autonomous regions and municipalities directly under the Central Government should establish corresponding standard review committees to file and review group standards and enterprise standards that concern unspecified users or consumers and the public interest. (3) In conducting reviews, the artificial intelligence standard review committees should focus on two aspects. The first is compliance review: checking whether standards comply with national laws, regulations and policy requirements, and whether they respect and safeguard citizens' basic rights. The second is ethical review: ensuring that the standard formulation process adheres to ethical principles, especially in areas such as automated decision-making and human-artificial intelligence interaction, and reviewing whether standards comply with ethical norms and basic ethical requirements such as fairness and transparency.
Second, improve the dynamic monitoring and evaluation mechanism for standards. In any control system, informational disturbances caused by environmental change will cause the output state of the controlled system to deviate from the set state; feedback control can reduce such deviation and improve the stability of the controlled system's operation. Given that technical standards are usually the result of negotiation and compromise among the relevant participants, once agreement is reached the standards tend to settle and stagnate rather than remain flexible and open as new information or developments emerge. It is therefore necessary to enhance the responsiveness of technical standards through feedback control. For instance, Article 32 of China's Food Safety Law stipulates that the health administrative departments of people's governments at or above the provincial level shall, together with the food safety supervision and administration, agricultural administrative and other departments at the same level, conduct follow-up evaluations of the implementation of national and local food safety standards respectively, and revise food safety standards in a timely manner based on the evaluation results. As artificial intelligence technology continues to develop, new technologies, application scenarios and social demands keep emerging, and existing standards may face the risk of lagging behind or becoming incompatible. Establishing a dynamic monitoring and evaluation mechanism to continuously track and assess the implementation effects of artificial intelligence standards is therefore key to ensuring their effectiveness. Specific measures include: (1) post-implementation assessment: after a standard is released and enters the implementation stage, its actual effects should be evaluated regularly, including whether the standard effectively addresses potential safety hazards in the application of the technology, whether it promotes technological innovation, and whether it protects the public interest; (2) a regular feedback mechanism: through regular feedback, opinions are collected from enterprises, industry organizations, technical experts, legal experts and the public to assess problems that may arise during implementation. For instance, some standards may prove overly complex in practice or unable to accommodate emerging technological applications; the feedback mechanism can promptly identify and correct such issues.
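The feedback-control idea above can be made concrete with a minimal sketch in Python: observed implementation indicators are compared with target values, and a standard is flagged for revision once the deviation exceeds a tolerance in consecutive review cycles. The indicator names, target values, tolerance and trigger rule are all hypothetical assumptions introduced solely for illustration.

```python
# Minimal sketch of feedback-style monitoring of a standard's implementation:
# compare observed indicators with target values each review cycle and flag the
# standard for revision when the deviation exceeds a tolerance in consecutive
# cycles. All indicator names, targets and thresholds below are hypothetical.

TARGETS = {"incident_rate": 0.02, "compliance_rate": 0.95}
TOLERANCE = 0.05          # acceptable absolute deviation from each target
CONSECUTIVE_LIMIT = 2     # cycles of excess deviation before revision is triggered

def needs_revision(history):
    """history: list of per-cycle observations, e.g. {"incident_rate": 0.08, ...}."""
    consecutive = 0
    for observation in history:
        deviation = max(
            abs(observation[name] - target) for name, target in TARGETS.items()
        )
        consecutive = consecutive + 1 if deviation > TOLERANCE else 0
        if consecutive >= CONSECUTIVE_LIMIT:
            return True
    return False

if __name__ == "__main__":
    cycles = [
        {"incident_rate": 0.03, "compliance_rate": 0.93},
        {"incident_rate": 0.09, "compliance_rate": 0.85},
        {"incident_rate": 0.10, "compliance_rate": 0.82},
    ]
    print("Initiate revision procedure:", needs_revision(cycles))
```

In practice, the indicators, targets and trigger conditions would themselves be determined through the assessment and feedback procedures described above rather than fixed in advance.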
Third, establish a third-party inspection and certification mechanism. Third-party inspection and certification, as a force independent of both technology developers and government regulatory agencies, can leverage its advantages in resources, expertise, information and diversity to objectively assess the compliance of artificial intelligence systems, helping to ensure the transparency, traceability and conformity with social ethical standards of the technology. At present, in the field of personal information protection, the governance effectiveness of third-party certification is gradually attracting attention. Article 62 of the Personal Information Protection Law of the People's Republic of China stipulates that the national cyberspace administration shall coordinate relevant departments to promote the construction of a socialized service system for personal information protection in accordance with the law, and support relevant institutions in providing personal information protection assessment and certification services. Introducing a third-party inspection and certification mechanism into the standardized governance of artificial intelligence therefore not only has practical significance but can also draw on this useful experience. At the same time, when introducing third-party certification into the standardized governance of artificial intelligence, it is necessary to remain highly vigilant about its potential risks of "market failure" and challenges to "independence", so as to prevent certification bodies, in excessive pursuit of commercial interests, from becoming dependent on or forming a "revolving door" relationship with the certified party, which would lead to declining certification quality, the hollowing-out of standards, and "bad money driving out good". China can prudently promote third-party inspection and certification of artificial intelligence standards in two respects: first, provide for artificial intelligence-related certification and inspection mechanisms through laws, administrative regulations, departmental rules and technical standards, clearly defining the conditions, procedures and content of inspection and certification; second, in accordance with the "Regulations on Certification and Accreditation", actively promote the establishment of professional certification bodies in the field of artificial intelligence, while clarifying their establishment conditions, legal responsibilities and so forth, to ensure their professionalism, impartiality and independence.
5. Conclusion
The rapid development of artificial intelligence not only brings opportunities for technological innovation, but also triggers numerous legal, ethical and social issues. Against this backdrop, the traditional single regulatory model faces enormous challenges, and there is an urgent need to explore more flexible and efficient regulatory paths. This paper proposes that regulating artificial intelligence through technical standards, especially under a co-regulation model, can effectively overcome the deficiencies of administrative regulation and self-regulation and provide an adaptive, responsive governance path. Technical standards are the recipes we write for reality: they shape not only the physical world around us, but also our social life and even human beings themselves. Because technical standards are interdisciplinary and cross-sectoral, they can both fill gaps in the existing legal framework and promote communication and coordination among different stakeholders. Theoretically, in the highly complex and dynamically changing field of artificial intelligence, technical standards as a regulatory tool can steer the direction of technological development and promote the healthy development of the industry without unduly interfering with technological innovation. In practice, however, the standardized governance of artificial intelligence also faces predicaments of legitimacy, scientific validity, coordination and effectiveness. Standardized governance based on administrative regulation is prone to excessive intervention and inefficient implementation and struggles to keep pace with rapid technological change, while self-regulation, though it has achieved certain results in some industries, may favor commercial interests at the expense of public interests. This paper therefore advocates remedying these deficiencies through a co-regulation model. Co-regulation emphasizes collaboration among government, enterprises, academia and the public, forming a governance system with the participation of multiple subjects; this can ensure a balance between technological development and social responsibility while maintaining market vitality. Through the relevant institutional designs, co-regulation can effectively address the current predicaments of legitimacy, scientific validity and coordination in the governance of artificial intelligence standardization, and fully stimulate and release the governance effectiveness of technical standards.
The original text was published in the 4th issue of "Comparative Law Studies" in 2025. Thanks to the WeChat public account "Comparative Law Studies" for authorizing the reprint.

