Cai Xingyue
Algorithm Regulation: From Normative Regulation to Layered Regulation
Regulating algorithms with traditional legal norms, especially by defining them through concepts, presents a fundamental problem: a lack of direct correspondence and smooth communication between norms and algorithms. This is because norms regulate human behavior, while algorithms are a form of machine decision-making. Norms, as textual expressions based on human language, struggle to define another form of digital expression based on machine language. Therefore, the in-depth development of algorithm regulation will encounter the continuously intensifying difficulty of matching and communicating between text and numbers. This results in varying effectiveness of algorithm regulation across the horizontal dimension of generality and specificity of algorithms, and the vertical dimension of phenomenon layer and hidden layer. Constructing a layered structure for algorithm regulation will help us maintain a clearer cognitive judgment of algorithms and their regulation, and discover the power and limitations of algorithm regulation. Only by transforming normative regulation based on legality judgment into layered regulation dedicated to algorithmic trustworthiness can effective regulation of algorithms be achieved.
Keywords: Algorithm; Algorithm Regulation; Normative Regulation; Layered Regulation
I. Normative Regulation of Algorithms
Modern jurisprudence operates within the basic framework of "norm-behavior," with its logical starting point being the explanation, guidance, adjustment, and limitation of behavior through norms . The premise of this cognitive logic lies in distinguishing between the normative world and the living world, establishing a dual structure between the two, so that behavior can be subsumed by norms. The fundamental purpose of law is to regulate human behavior in the living world and to establish order through this regulatory technique . This approach has been very effective in the interaction between the normative world and the living world, where the intended content of each concept in the normative system can find a corresponding behavior or fact in the living world . Because the carrier of the normative system is human language, and language itself originates from human society, language can correspond one-to-one with the living world. Through certain forms of discourse and symbolic expressions, the meaning of norms can be smoothly transmitted to the living world . At the same time, the meaning of human real life can also flow back and be mapped into the normative world. This form of discourse and symbolic expression is legally referred to as a concept. Law is a form of "conceptual" thinking, and it is the generalizing ability and expressive function of concepts that bridge the interactive channel between norms and behaviors of different natures . Concepts become the intermediary between norms and behaviors, extracting the rationality within norms and transmitting this normative rationality to behaviors; at the same time, they feed back the facts and values, rationality and irrationality within behaviors into the normative coordinates for judgment .
Of course, we have more than just individual concepts. People can inject more wisdom into concepts, discover the relevance between concepts, refine the relationships between similar behaviors and phenomena into typified expressions, and further condense more self-consistent and coherent conceptual arrangements and type matrices on the basis of types, integrating them into a comprehensive system . Concept-type-system builds a form of expressive discourse with higher cognitive breadth, intensity, and precision between norms and behaviors. Legal professionals have constructed modern jurisprudence into a unique and effective knowledge space that has well accomplished the projection from norms to behaviors .
Using this method, the regulation of human behavior by norms has long appeared "invincible." From the perspective of normativism, all legally relevant behaviors are objects of regulation: they can be expressed by legal concepts and covered by those concepts and the normative types and systems built upon them. After the rise of machine learning, algorithms began participating in human production and daily life, gradually becoming part or all of decision-making systems. Algorithms have thus also become objects of normative regulation, treated just like human behavior and subjected to the same regulatory logic. It is precisely by adopting this seemingly "unbeatable" unified approach that modern jurisprudence has run into trouble in the algorithmic world of the artificial intelligence era.
The symptoms of this crisis are concentrated in the lack of direct correspondence and smooth communication between norms and algorithms . On the one hand, norms are textual expressions of abstract thinking extracted from the living world, and their basic mode of composition is language and writing. The meaning of norms is understood through discourse expression and verbal interpretation . Understanding norms relies on direct comprehension of language and writing, and applying norms relies on the textual interpretation of language and writing. This is a system of interpretation and application using text as a code . It can be applied to behavior because the expression of human behavior also uses the same language and writing system. The uniformity in the expression of weights and measures ensures the correspondence between norms and behavior, making normative interpretation of behavior unobstructed . However, algorithms are completely different. The underlying code of algorithms is not text but numbers. Their mode of composition is calculation, completed by the arrangement of a set of digital status bits or a digital matrix . Text and numbers are entirely two sets of symbolic systems. On the other hand, the logical development of norms relies on the causality of common sense and consensus, which originates from the experience summary and wisdom accumulation of human practice, while the causality of algorithms is a functional expression, originating from the combination and superposition of mathematical axioms and formulas . The logic of their operation is vastly different, representing two different rational presentations of intelligence .
The lack of direct correspondence and smooth communication between norms and algorithms creates the central difficulty of algorithm regulation: how are algorithms to be regulated with norms? For example, when we invoke the principle of transparency against an algorithm, what exactly are we demanding? When an algorithm's mathematical complexity far exceeds the cognitive capacity of ordinary people, does the incomprehensibility caused by that complexity count as a form of opacity? When an algorithm's design and architecture are entirely clear, but the results of its operation on massive data are unpredictable, does that count as opacity? When the algorithm's intelligence is expressed through computation in hidden layers, is that yet another form of opacity (deep learning, for instance, rests on computation in hidden layers such as convolutional layers)? What transformation must an algorithm undergo to count as transparent in the eyes of a norm? And when the algorithm's own internal mathematical principles and formulaic logic do not permit it to be forcibly transformed, should the algorithm be abandoned? These questions show that norms have already revealed their fragility and superficiality in the face of algorithms. "The problem is how law intervenes in the regulation of algorithms. If observed from the perspective of internal regulation, law seems unable and should not adjust algorithms, just as law cannot adjust the internal intentions of natural persons."
II. Horizontal Dimension of Algorithm Regulation: Generality and Specificity
(I) Generality of Algorithms
The crisis encountered by norms in the realm of algorithms has caused their regulation of algorithms to begin and end with concepts . As mentioned earlier, through concepts, norms and their objects establish a dual structure. It is precisely after the normative object is subsumed into a highly generalized concept that the relationship between the norm and the normative object is shifted to the relationship between the norm and the concept . Through this operation, the norm gains the advantage of subjectivity (dominant position) and can regulate the object through this concept; at the same time, this concept is gradually interpreted, and then replaces the normative object itself, becoming the regulated object (subordinate position), accepting scrutiny and evaluation from the normative subject . As a result, the power imbalance between "up-down," "master-servant," "center-periphery," and "primary-secondary" is continuously reinforced, and the conventional techniques and universal operations of norms then penetrate directly into the normative object, thereby completing regulation . Algorithms have always encountered such a "routine." When an algorithm is considered as a conceptual meaning composed of the two characters "算法" (algorithm), this concept of "algorithm" replaces the many rich and diverse forms of intelligence in the field of artificial intelligence, such as program design, mathematical expressions, and code architectures, becoming a regulable collective concept . It is precisely by adopting this concept that the prerequisite for regulating algorithms—the integrated expression of algorithms—is met, allowing norms to be applied to algorithms and exert their consistent regulatory function . The normative regulation of algorithms follows a norm-centered structural epistemological stance, advocating that algorithms be treated as an object concept and a life fact within the normative analysis framework. Through such a definition, algorithms, as conceptual objects in the form of facts, are interpreted from within the normative system, and then the regulatory relationship between norms and algorithms is reconstructed through methods such as subsumption . This is an epistemological framework that appeals to pure form, in which norms and algorithms are clearly distinct. Norms appeal to a pure form of oughtness, regarded as a thinking expression based on abstract pure rationality, embodying the integrated effect of legal values. They are superior to algorithms, serving as an interpretive and transformative tool, providing intellectual functions for algorithms and demonstrating the logical space of what "ought" to be; while algorithms are regarded as the collective expression of the phenomenon of artificial intelligence, and the richness of artificial intelligence is absorbed into this corresponding concept, thereby establishing a consistent empirical world of "what concept this algorithmic decision belongs to," creating general rules and scenarios for the unified application of norms . Thus, an epistemological framework with a clear dichotomy between ought and is, value and fact, subject and object is formed. It is within this framework that norms demonstrate their consistent conceptual influence on regulatory objects, incorporating algorithms as a whole into the scope of legal order adjustment without distinction .
Based on the above epistemological stance, the normative approach to algorithm regulation mainly involves corresponding the standard interpretation of norms to algorithms as facts through the method of subsumption, using standardized application to support the rationality of algorithm regulation . This solves the regulatory problem of the generality of algorithms, but it also objectively creates a vulnerability in algorithm regulation, as norms inevitably overlook the specificity of algorithms brought about by technical complexity . On the one hand, each algorithm represents a logic, and the conceptual aggregation of algorithms does not represent the aggregation of algorithms; there is no consistent expression that governs all algorithms . On the other hand, algorithms, as a digital representation of mathematical thinking, are constantly evolving. They demonstrate the rich diversity of mathematical thinking methods and have always maintained a state of rational advancement, without adhering to a fixed state of rationality . To some extent, the degree of rationalization of the complex intelligence contained in various algorithms far exceeds the level of general facts in the living world. Using a single concept in the ideal world created by norms to broadly subsume the vast and complex algorithmic world and unify the regulatory path for all algorithms can only be said to be a compromise in dealing with highly complex technical thinking . This can only be regarded as the first step in regulating algorithms . The advantage of this step is that, through conceptual construction, norms successfully grasp the generality of algorithms, uniformly regulating the principles, methods, and paths of various algorithms that have inherent consistency. Norms are based on the generality of various algorithms . They extract certain standardized content from various algorithms, treating them as "quasi-behaviors" of artificial intelligence, and regulate these "quasi-behaviors" of artificial intelligence using the same means as regulating human behavior, thereby establishing the dominant position of norms and allowing algorithms to still be "interpreted-transformed" within the existing legal framework, remaining within the scope of traditional legal interpretation . The disadvantage of this step is that the normative regulation of algorithms through conceptual subsumption will remain superficial, unable to penetrate the deep and complex mathematical logic built by algorithms. Normative rationality will encounter a rebuttal from algorithmic rationality, and may even be undermined by it . The highly abstract conceptualized regulation of the generality of algorithms cannot resolve the various concrete problems of various algorithms in real-world applications. The development of algorithm regulation must move from the generality of algorithms to the specificity of algorithms .
(II) Specificity of Algorithms
Regulation based on the generality of algorithms merely pre-sets algorithms within a cognitive framework lacking in legal significance, thereby filtering out their rich connotations and enormous potential . To enhance the effectiveness of algorithm regulation, it is necessary to step out of this framework and engage in dialogue at the level of each specific characteristic of algorithms, thereby opening up the deep structure of algorithm regulation and finding a more stable regulatory "singularity" for it . The move from the generality of algorithms to the specificity of algorithms is not a disruptive paradigm shift, but rather an inevitable direction of its logical extension . The true governance of algorithms must delve into the internal aspects of each specific characteristic of algorithms, constructing an interactive relationship between norms and algorithms on a more three-dimensional framework . The specificity of algorithms is manifested in the different decision-making characteristics that the application of different algorithms will present due to the diversity and complexity of algorithm technology . The specificity of algorithms is built on two basic dimensions: the space of scenarios and the distance of time . On the one hand, the richness of digital scenarios has given rise to an ecosystem of algorithmic diversity. Algorithms are always embodied as solutions in specific digital scenarios, and their essence is a set of specific thinking strategies . Different algorithms express their own specificity through their strategies and applications in different scenarios . It is the "concreteness" of digital scenarios that determines the "specificity" of algorithm design and application . As China's digital practice becomes increasingly rich and digital scenarios become more abundant, the algorithms prescribed by scenario practice show diversified specific characteristics . On the other hand, technology-driven algorithms are constantly iterating, and the data structures and technical routes presented by algorithms will continue to be updated . This makes algorithms not static; they are not a fixed entity but should be regarded as an integrated state of their own historical accumulation over a long period of time evolution . In other words, every algorithm is a synthesis of the past and the present, and they have never stopped extending in the time dimension .
Correspondingly, the regulation of algorithms must also be integrated into these two dimensions, that is, the possibility of regulation is presented in time and space . This means that algorithm regulation is not limited to a general perspective but needs to be designed within a new framework completely different from conceptual regulation . It is necessary to dissolve certain specific rules into the algorithm, making them a part of the algorithm's operation, as a "consideration factor" in the algorithm's thinking strategy, that is, to make the algorithm generate "normative awareness" . Specifically, it means designing certain specific rules for the risk of illegality of algorithms in specific scenarios, transforming these rules into digital expressions, embedding them into the algorithm design program, and ensuring that they are synchronously updated in the algorithm's updates and iterations, so that "norms-algorithms" are integrated in time and space, rather than merely externally regulating algorithms . This regulatory method will effectively solve the problems of superficiality and weakness in the normative regulation of the generality of algorithms by legal concepts, and advance algorithm regulation to the second level . It focuses on the specific characteristics of algorithms in specific situations, transcends the abstract form of general concepts, examines the correspondence and communication relationship between norms and algorithms, and increases the normative allocation in the internal architecture of algorithms by transcribing textual symbols into digital symbols, thereby completing the regulation of algorithms . It can be said that carrying out regulation on the specificity of algorithms is a major turning point in algorithm regulation, realizing the reproduction of norms in the specificity of algorithms from the production of norms to regulate the generality of algorithms, subverting the traditional logic of algorithm regulation, and endowing the algorithmic world with new normative significance . The regulation of algorithms must find a more legally significant positioning, and the framework structure of regulation must be expanded on a broader cross-section of algorithms, taking into account the specificity of algorithms. Only in this way can norms truly exert the same function and effectiveness in the field of algorithms based on mathematical logic as they do in the field of human life . Entering the field of algorithmic specificity, algorithm regulation can jump out of the quagmire of the "regulation-algorithm" dual structure that simply relies on concepts or regulations, and get rid of the powerless state of merely staying at the level of general regulation .
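To make the idea of "dissolving a rule into the algorithm" concrete, the following minimal sketch (in Python; all names and thresholds, such as MIN_DELIVERY_MINUTES, are hypothetical illustrations rather than provisions of any cited regulation) shows a delivery-dispatch routine in which a time-limit floor and a workload cap are terms of the computation itself rather than an external audit applied afterwards.

```python
from dataclasses import dataclass

# Hypothetical regulatory parameters, expressed as data the algorithm must consume.
MIN_DELIVERY_MINUTES = 30.0   # assumed floor: never promise less than 30 minutes
MAX_ORDERS_PER_HOUR = 6       # assumed cap on per-rider order load

@dataclass
class Order:
    distance_km: float
    rider_current_orders: int

def promised_time(order: Order) -> float:
    """Estimate a delivery promise, with the normative floor embedded in the computation."""
    raw_estimate = 10 + 4 * order.distance_km       # purely efficiency-driven estimate
    # The rule is a term of the algorithm itself, not an after-the-fact audit step:
    return max(raw_estimate, MIN_DELIVERY_MINUTES)

def can_assign(order: Order) -> bool:
    """Refuse assignments that would push a rider past the embedded workload cap."""
    return order.rider_current_orders < MAX_ORDERS_PER_HOUR

if __name__ == "__main__":
    o = Order(distance_km=2.0, rider_current_orders=5)
    print(promised_time(o))   # 30.0 -- the floor, not the raw 18-minute estimate
    print(can_assign(o))      # True -- one slot left under the cap
```

The only point of the sketch is where the constraint lives: inside the scoring and assignment functions, so that every update or iteration of the dispatcher carries the rule along with it.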
III. Vertical Dimension of Algorithm Regulation: Phenomenon Layer and Hidden Layer
Algorithms not only exhibit the above-mentioned differences in generality and specificity but also possess different levels of depth in the vertical dimension. This is especially true of deep learning algorithms, whose understanding requires a multi-dimensional approach. Machine learning, and deep learning in particular, is now widely used in many scenarios of production and everyday life: generative artificial intelligence represented by ChatGPT and the core decision-making systems of autonomous vehicles, both hotly debated in theoretical and practical circles, all rely on deep learning algorithms. Based on the differing depths to which humans can probe them, such algorithms can be divided into a phenomenon layer and a hidden layer. The phenomenon layer is the basic form of the algorithm that is knowable and visible to humans, mainly the algorithm's design and architecture. It is the initial form in which an algorithm is established, the computational thinking programmed by humans using mathematical logic. Since it presents the algorithm's principles and rests mainly on human design and arrangement, it is necessarily intelligible to humans. The phenomenon layer is thus the interface for effective interaction between human intelligence and artificial intelligence, the domain where the two intelligences intersect. The hidden layer is the core level at which the algorithm performs deep computation, mainly the hidden computational stratum that is first designed according to algorithmic thinking and then washed and filtered by large volumes of data. The hidden layers in deep learning, for example, are the core part through which the algorithm realizes artificial intelligence, the region that distinguishes artificial intelligence from human intelligence and that humans do not master. Because it is the deep stratum of the algorithm, continuously accumulated and deposited through high-volume, complex computation, and because it is the hallmark region that separates machine intelligence from human intelligence, it is difficult for humans to comprehend. From the perspective of human intelligence, human cognition reaches the phenomenon layer of algorithms and stops at the hidden layer; from the perspective of artificial intelligence, the phenomenon layer is an a priori set by humans, while the hidden layer is the a posteriori of the machine's autonomous computation once it possesses so-called "free will." The distinction between the phenomenon layer and the hidden layer is not a deliberate artificial division but a natural one, grounded in a deep understanding of how artificial intelligence technology works. When intelligent agents constituted in these two different ways, human intelligence and artificial intelligence, interact with each other, there is both an intersection and an area exclusive to each. From the human perspective, there is a gradient from knowable to unknowable as human intelligence explores artificial intelligence: the phenomenon layer is the knowable interval, the communication interface for human-machine interaction. At this level, humans endow artificial intelligence with a constructive power by programming the rules for machine operation, so that algorithms can come into being.
Therefore, the phenomenon layer lays the foundation for artificial intelligence and becomes the architectonics of algorithmic thinking. With the help of the knowability and interactivity of the phenomenon layer, humans and machines maintain the possibility of mutual influence and mutual assistance.
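As a rough technical illustration of this division (a minimal NumPy sketch over assumed toy data, not a description of any particular production system): the lines that declare the network's architecture below are the phenomenon layer, fully legible human-authored mathematics, while the numerical values the weights take on after training are the hidden layer, stated nowhere in the source text and shaped entirely by the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# ----- Phenomenon layer: the human-designed architecture, fully legible as text -----
# A small network: 4 inputs -> 8 hidden units (ReLU) -> 1 sigmoid output.
W1, b1 = rng.normal(size=(4, 8)) * 0.5, np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)) * 0.5, np.zeros(1)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)           # hidden activations
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # predicted probability
    return h, y

# ----- Hidden layer: what the weights become after data-driven training -----
X = rng.normal(size=(256, 4))
t = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)   # an arbitrary target rule

lr = 0.5
for _ in range(500):                    # plain gradient descent on cross-entropy
    h, y = forward(X)
    grad_z2 = (y - t) / len(X)          # gradient at the output pre-activation
    grad_h = (grad_z2 @ W2.T) * (h > 0) # backpropagate through the ReLU
    W2 -= lr * h.T @ grad_z2
    b2 -= lr * grad_z2.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

# The trained values of W1 and W2 are stated nowhere in the source code above;
# they were impressed by the data. That gap is the sense of the "hidden layer".
print(np.round(W1, 2))
```

Nothing in this sketch is specific to deep learning; it only makes visible where human authorship stops and data-driven formation begins.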
However, having only the phenomenon layer is not enough for artificial intelligence to be called intelligence . Intelligence itself requires the possession of autonomous wisdom and intellectual independence. When an intelligence is limited to artificiality, it can only be a manifestation of human intelligence . The essential requirement of artificial intelligence is to express new intelligence that transcends human intelligence on the basis of artificiality . Therefore, the algorithmic architecture and operation programs built in the phenomenon layer are merely the origin of artificial intelligence, and the content of its intelligence is still empty, requiring the algorithm to be continuously filled through its own autonomous operations, which transitions to the hidden layer . The hidden layer is where the algorithm's deep computations are deposited, giving birth to the unique wisdom of artificial intelligence through a form of non-visual computation . The hidden layer is the foundation for the existence of artificial intelligence, and algorithms thus generate intelligent expressions different from human thinking . The biggest feature of the hidden layer is its opacity . It is difficult for us to know how artificial intelligence makes decisions, and this is the meaning of artificial intelligence's existence . Its opacity brings about its independence as intelligence, which is the fundamental difference between it and human intelligence . The phenomenon layer and the hidden layer are natural presentations of the inherent characteristics of algorithms, and together they constitute the hierarchical structure of the algorithm's intelligence . To regulate algorithms, it is necessary to understand and respect the natural depth line of algorithms . Regulation is a process of bestowing legal meaning. As a form of output of human intelligence, norms must also accept the existing facts of algorithms in the hierarchical structure . The normative meaning expressed in conceptual form is difficult to enter the hidden layer of algorithms and cannot have a substantial regulatory effect on the deep layer of algorithms. Therefore, its focus should be on the phenomenon layer of algorithms . However, currently, the normative regulation of algorithms still remains at the level of intuitively and simply responding to the complexity and variability of algorithms with concepts . We make a normative judgment but do not verify its applicability to algorithms. They unknowingly become mere regulatory intentions. We assume that algorithms are so "obedient," but we have not confirmed whether algorithms operate as we imagine after running . Since it does not return to the internal technical constraints of algorithms, it will inevitably stop at the external imagination of regulation . Because an algorithm is an intelligent decision based on mathematical thinking, in order to make it "obedient," the norm, which uses human language to express textual thinking, must be "enriched" by matching and communicating with the algorithm's internal structure to effectively regulate the algorithm, which uses machine language to express digital thinking . Once regulatory rules are filled into the algorithm's internal structure, then regulating the algorithm is no longer an intentional object but a constraint that has been implemented, "creating more efficient governance technologies" within the algorithm, thereby effectively limiting the algorithm . 
More importantly, when the algorithm executes the regulatory rules set by the phenomenon layer, it must autonomously bring the constraints into the algorithm's hidden layer, so that the hidden layer still maintains the efficiency of regulation during operation, thereby transforming regulation from human to machine autonomy . To achieve this, algorithm regulation must achieve the digitalization of regulation, that is, rewriting the core elements of regulation in computer language, inputting them into the algorithm system in digital machine language, so that a certain norm we intend to achieve is transformed into a certain code in computer programming, giving it a certain regulated gene from the beginning of the algorithm architecture, thereby truly achieving deep regulation of algorithms . Especially in today's digital society, the computability of data determines that almost all legal activities based on it are computable, and norms can be transformed into machine language presented in data . Embedding norms into the phenomenon layer of algorithms in the form of machine language rewriting, and gradually bringing them into the hidden layer through the algorithm's operations in the phenomenon layer, exerting the binding force of regulation from within, and thus achieving substantive algorithm regulation, is the logical and scientific solution for algorithm regulation . It should be pointed out that the phenomenon layer and the hidden layer are not clearly demarcated. They jointly express the natural stratification in the process of continuous deepening of computation. There is an intersecting transitional zone between the phenomenon layer and the hidden layer, and the proportion between the two also varies depending on the algorithm . This is a way to describe the complexity of algorithms. With the help of this method, we can further explore targeted regulatory solutions based on the characteristics of different levels . It is precisely the deep understanding of the technical complexity of algorithms that makes the regulation of algorithms maintain a cautious rationality, making a regulatory strategy that integrates rigid constraints in the phenomenon layer and flexible constraints in the hidden layer possible .
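One hedged technical reading of "bringing the constraint into the hidden layer," offered only as a sketch and not as the author's prescribed mechanism, is to encode the norm as a penalty term in the training objective, so that the learned parameters internalize it instead of having it enforced only at the interface. The protected feature and the value of LAMBDA below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear scorer: score = X @ w. Feature 0 stands in for an attribute that a
# hypothetical rule says must not drive the decision (an assumption of this sketch).
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

w = np.zeros(3)
LAMBDA = 50.0      # strength of the embedded norm (assumed value)
LR = 0.02

for _ in range(2000):
    pred = X @ w
    grad = X.T @ (pred - y) / len(X)   # ordinary least-squares fit gradient
    grad[0] += LAMBDA * w[0]           # the norm, as a penalty pulling w[0] to zero
    w -= LR * grad

print(np.round(w, 3))  # w[0] ends up near zero although the data rewards ~3.0 on it
```

Under this assumption the rule survives retraining and iteration because it is part of the objective itself, not a post hoc filter applied to outputs.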
IV. Layered Regulation of Algorithms
Faced with the chasm between norms and algorithms, we need to measure the distance between the two, lay out interconnected steps, and adjust the degree of normative regulation at different levels to ensure the effectiveness and rationality of algorithm regulation . This naturally leads to the significance of distinguishing and structuring algorithms and their regulation . We must divide several levels on the spatial scale between norms and algorithms, make a series of judgments on the possibility of regulation for the algorithmic situation at each level, and establish a layered regulatory framework for algorithms . Only in this way can we truly re-establish the foundation of algorithm regulation, which is the primary task of building an algorithm regulation system . The above analysis shows that algorithms have a "latitude and longitude structure" . On the one hand, the generality and specificity of algorithms, as tools for horizontal distinction of algorithms, represent different scales of algorithmic commonality and individuality . On the other hand, the phenomenon layer and hidden layer of algorithms, as vertical extensions of algorithms, represent different levels of depth and vertical space of algorithmic intelligence . These together constitute the two-dimensional structure of algorithms, and the intersection of the four elements on the two dimensions produces four combinations of algorithm positions: generality in the phenomenon layer, specificity in the phenomenon layer, generality in the hidden layer, and specificity in the hidden layer . These four algorithm positions are the permutation matrix for the expansion of the algorithm set . Correspondingly, norms exhibit different regulatory effectiveness for algorithms at different positions . As shown in Table 1:
Table 1: Four Positions of Algorithms and Their Regulatory Effectiveness
Algorithm Position | Regulatory Effectiveness (Normative Regulation)
Generality in the Phenomenon Layer | Easy to understand, universally applicable, but limited effectiveness
Specificity in the Phenomenon Layer | Universally applicable, relatively good effectiveness, but limited by scenarios
Generality in the Hidden Layer | Limited effectiveness; alternative technical solutions exist
Specificity in the Hidden Layer | Difficult to regulate
This indicates that regulation is not an immutable static constraint acting on algorithms, but a dynamic force that extends and contracts, adjusting as the algorithm position and the algorithmic characteristics at that position change. The regulation of algorithms can thus be understood as norms sliding along a scale within the latitude-longitude framework of algorithms, varying with the position they reach and maintaining a regulatory continuum across the unified distribution of algorithms. An algorithm is a collective concept, the collective expression of a group of strategic mechanisms embodied in computational form. As a collective, algorithms naturally possess their own distribution states and exhibit specific extension structures. This means that regulating algorithms first requires recognizing their distribution pattern, entering this multi-dimensional pattern of extension, and handling the relationship between norms and algorithms at a more refined level.
1. Principle-based Regulation of Generality in the Phenomenon Layer The domain of generality in the phenomenon layer is the most transparent part of algorithms and the easiest to regulate . This algorithm position extracts the superficial generality of algorithms, exhibiting a high degree of abstraction, making it highly suitable for normative regulation using concepts as tools . Regulation based on the generality in the phenomenon layer is actually a kind of legal principle-based regulation . This does not mean merely applying legal principles to regulate, but rather that whether it is legal principles or legal rules, at this level they can only be principle-based, because the object of regulation is the generality extracted from the knowable phenomenon layer of algorithms, which is the most common and universal part of algorithms . The more common and universal, the higher the degree of abstraction, and the easier it is to regulate with the abstract concept, but the weaker the regulatory effect . The convenience of regulation is inversely proportional to the intensity of regulation . Therefore, regulation at this algorithm position is easy to understand and universally applicable, but its effectiveness is limited . For example, the principle of "algorithmic moderation" aimed at protecting the rights of food delivery riders on online food delivery platforms requires that when evaluating food delivery personnel, the "strictest algorithm" should not be used as an evaluation requirement, and the evaluation factors such as the number of orders, on-time rate, and online rate should be reasonably determined, and the delivery time limit should be appropriately relaxed . This is a general provision targeting the phenomenon layer of algorithms. However, in specific application, the lack of specific judgment standards for how many orders and what on-time rate are considered "reasonable," and how long a time limit can be considered "relaxed," affects the realization of the effectiveness of this principle . In the field of algorithm application, principle-based regulation stably guides and coordinates the legal and compliant behavior of all participating parties . For example, Article 6 of the Personal Information Protection Law stipulates the principle of purpose limitation, which, as the "king clause" of the legal principle system for personal information protection, requires that when using algorithmic technology for data collection and processing, an attitude of humility and self-restraint should be adopted to ensure that the processing purpose is clear, the processing behavior is directly related to the purpose, and the processing result has the least impact on individuals . This provision is a general principle targeting the phenomenon layer of algorithms. In data governance practice, all participating parties need to establish a "consensus-based processing framework" under which the purpose of data processing is shared and condensed into a specific "consensus" to ensure that data, after being circulated, is fixed within a specific scope, avoiding the use of data by participating parties for other purposes "outside the consensus" . 
For another example, under the governance framework of network security and algorithmic speech, network security should in principle be protected in a graded manner: information content that clearly harms national security, and information that could endanger national security if it were damaged, rendered non-functional, or leaked, should be assigned the highest protection level and given key protection. Applying this principle to regulate AI-generated content means exercising overall control over generated content at the algorithm's phenomenon layer, requiring platforms to filter and review the information their algorithms produce, while the specific filtering and review must be achieved through detailed rule-based regulation targeting the specificity of algorithms in different scenarios.
2. Rule-based Regulation of Specificity in the Phenomenon Layer The specificity of the phenomenon layer only serves as the unique part of various algorithm design architectures, that is, the different programming methods of each algorithm, or its application to specific digital scenarios or based on specific datasets, or the adoption of new machine language expressions, etc. . It represents both the diversity of existing algorithms and the innovativeness of exploring algorithms . This algorithm position is the concrete expression of knowable algorithms, which are still transparent, but their transparency needs to be captured in specific scenarios . Therefore, regulation at this algorithm position requires norms to take another step forward from the principle level and enter into the algorithm to establish norms within it . Since this kind of norm must be matched with the concrete algorithm in a specific scenario, it is not a legal principle norm but must be a legal rule norm . That is to say, the rule may not have the universality applicable to other algorithms, but it can well constrain "this particular" algorithm . This means that at this algorithm position, norms and algorithms are deeply integrated, and this integration has exclusive characteristics . This is the only way to refine algorithm regulation, and the path to regulating algorithms needs to be developed in this space . Specifically, rule-based regulation at this level needs to be placed in specific algorithm application scenarios, because the nature of algorithms will be different in different scenarios . For example, under the principle of purpose limitation for personal data processing mentioned above, there are many rules targeting specific scenarios, among which the "consensus-based processing framework" rule for privacy-preserving computation includes sub-rules such as the computation participants jointly setting clear computation goals, jointly agreeing on computation logic, the processing of original data by privacy-preserving computation being specially "customized" to achieve the computation goals, and accepting constraints of specific technical solutions . These rules are specific expansions of the purpose limitation principle in the privacy-preserving computation scenario, and they are detailed provisions for the three overall requirements in the principle: "clear processing purpose, processing behavior directly related to the purpose, and processing result having the least impact on individuals," which need to be implemented in the algorithm phenomenon layer of various specific application scenarios . For another example, the principle of hierarchical protection of network security mentioned above is transformed into the following rules in the governance practice of false and harmful information in generative artificial intelligence: first, establish unified information content review and filtering standards covering model building, training, and operation; second, list information content that clearly harms national and social security interests as "highest sensitivity level," and mark and eliminate it from the database in the early stage of training; third, establish a regular inspection mechanism in the data planning, data prompting, and data fine-tuning stages to clean and filter relevant content in a timely manner . These rules require segmented processing of the specificity of the algorithm phenomenon layer, reflecting the certainty and specificity of rule regulation .
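A minimal sketch of how the second and third of these rules might look once transcribed into code (the pattern list, labels, and function names are illustrative assumptions, not drawn from any named standard or system):

```python
import re

# Assumed sensitivity tiers; real review standards would be far richer than one pattern.
HIGHEST_SENSITIVITY_PATTERNS = [
    re.compile(r"\bexample-banned-term\b", re.IGNORECASE),
]

def sensitivity_label(text: str) -> str:
    """Tag a training sample; 'highest' means it is excluded before training."""
    if any(p.search(text) for p in HIGHEST_SENSITIVITY_PATTERNS):
        return "highest"
    return "normal"

def clean_corpus(corpus):
    """Early-stage filtering: mark and eliminate highest-sensitivity items."""
    return [t for t in corpus if sensitivity_label(t) != "highest"]

def periodic_inspection(corpus):
    """A recurring check usable at the data planning / prompting / fine-tuning stages."""
    flagged = [t for t in corpus if sensitivity_label(t) == "highest"]
    return {"total": len(corpus), "flagged": len(flagged)}

if __name__ == "__main__":
    corpus = ["benign sample", "contains example-banned-term here"]
    print(periodic_inspection(corpus))   # {'total': 2, 'flagged': 1}
    print(clean_corpus(corpus))          # ['benign sample']
```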
3. Technical Regulation of Generality in the Hidden Layer The generality of the hidden layer refers to the general characteristics generated by algorithms in deep computation, which is a representation of the commonality that emerges after different algorithms perform their respective hidden computations, and it is the boundary where human intelligence explores artificial intelligence . At this algorithm position, although the source and reasons for the characteristics of each algorithm cannot be fully known, through the induction and summarization of generality, we can still perceive the trends and paths of algorithm computation . These emerging generalities become certain clues for human intelligence to understand artificial intelligence and also serve as a reminder of which aspects of algorithms may need to be regulated in the future . Therefore, it can only be a field awaiting normative regulation. Although it indicates aspects that may need to be regulated in the future, it is still uncertain how to regulate them . That is to say, it expresses the potential of normative regulation, possessing a certain potential to rise to the phenomenon layer, requiring the continuous accumulation of various algorithmic commonalities to increase its rate of ascent. When it accumulates sufficiently maturely, it will enter the phenomenon layer and become an object of clear normative action . With the progress of new technologies and the strengthening of human cognitive control over algorithms, the generality of the hidden layer will be further developed, gradually climbing from an implicit state to the phenomenon layer . Therefore, it is a transitional channel for the leap of algorithm levels . Given that it is always in a developmental position, the regulatory power and effect of norms on it are quite limited, and it is necessary to find other alternative regulatory methods . Classical architectural theory regards code as law, exploring the possibility of code replacing law in playing a normative role in cyberspace. Today, if code is replaced by algorithms, it is logically tenable, that is, algorithms, as a technology, can also play a role in regulating algorithms . In fact, we do not need to analogize technology to law, nor do we need to regard rules as algorithms . Technology itself has the function of regulation, and it can affect algorithms without formal transformation . Especially in the hidden layer of algorithms, technology checking and balancing technology will become an important way for humans to indirectly control algorithms . We cannot regulate algorithms by figuring out the deep logic of algorithm operations and drafting new laws for them . At this time, legal norms alone cannot fully maintain the order of the algorithmic world. Therefore, the path of algorithm regulation should shift from normative regulation relying on principles and rules to technical regulation, thereby establishing a technology-based order under a rule-based order . 
Take privacy-preserving computation as an example. Under the principle of purpose limitation for data processing, suppose that in a financial risk-monitoring scenario Company A and Bank B undertake joint modeling, and that through the "consensus-based processing framework" rule the parties fix the computational purpose (building a financial risk-scoring model for Company A to enable dynamic monitoring of its post-loan risk) and the computational logic (the functional relationship between Company A's post-loan risk score and three types of data about Company A: its loan data, its business data, and its real-estate mortgage data held in a certain financial government database). Because multiple parties are involved, and to ensure that the data held by each party is neither leaked nor directly exchanged, federated learning is used to reach into the algorithm's hidden layer: the parties' data interaction is confined to model-gradient data locked to the specified processing purpose, the data cannot be diverted to other scenarios, and model training is completed on local devices. Here privacy-preserving computation, acting as a regulatory method, prevents data abuse to the greatest possible extent, achieves the regulatory goal intended by the purpose-limitation principle, satisfies the requirements of the "consensus-based processing framework" rule, and accomplishes the regulation of algorithms by algorithms. This is a regulatory effect that legal norms alone cannot achieve.
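The federated learning mechanism invoked here can be sketched schematically as follows (federated averaging over a toy linear model; the data, party sizes, and coefficients are assumptions for illustration, not the joint-modeling system described in the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two parties hold different samples locally; raw rows never leave a party,
# only model parameters tied to the agreed purpose are exchanged.
def make_local_data(n):
    X = rng.normal(size=(n, 3))                   # e.g. loan / business / mortgage features
    y = X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.1, size=n)
    return X, y

parties = [make_local_data(200), make_local_data(150)]
global_w = np.zeros(3)

def local_update(w, X, y, lr=0.1, steps=20):
    """Ordinary gradient steps computed entirely on the party's own device."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(X)
    return w

for _ in range(10):                                # federated rounds
    local_ws = [local_update(global_w, X, y) for X, y in parties]
    sizes = np.array([len(X) for X, _ in parties], dtype=float)
    # The coordinator sees only parameter vectors, weighted by local sample counts.
    global_w = np.average(local_ws, axis=0, weights=sizes)

print(np.round(global_w, 3))   # approaches [0.8, -0.5, 0.3] without pooling raw data
```

What matters for the regulatory point is the information flow: the coordinator and the other party only ever see parameter updates bound to the agreed modeling purpose, never the raw records held locally.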
4. Unregulatable Specificity in the Hidden Layer The specificity of the hidden layer refers to the self-intelligent expression of algorithms in deep computation, which is the most essential landmark area of artificial intelligence . The intelligence of artificial intelligence is achieved through the algorithm's own internal computation. Without this algorithm position, artificial intelligence would no longer be artificial intelligence, at most it could be called manually operated intelligence . The difference between one intelligence and another lies in its ability to independently complete tasks without relying on other intelligence . This algorithm position is the key area that enables artificial intelligence to exist and become valuable . Of course, this will bring fear to humans and a strong desire for regulation driven by fear . However, this is an unregulatable territory . The prerequisite for regulation is a clear object, and only a clear object can enable the exertion of norms to achieve an effective point of action . However, in the hidden layer of complex deep computation, it is difficult to determine what exactly needs to be regulated . Therefore, this algorithm position is excluded from regulation . Here, we must acknowledge the limits of regulating algorithms. Regulation cannot be infinitely extended. When it encounters the essence of algorithms, the highly complex and inherent computational hardness contained in the algorithms themselves sets a barrier for regulation . To some extent, this is a place where humans are at a loss and will inevitably be an unregulatable "wasteland" . For example, the operation process of the dialogue model of generative artificial intelligence within the technical system is in a "black box" state. Although explainable artificial intelligence has developed rapidly in recent years, producing algorithms for explaining algorithms such as "counterfactual explanation" and "user-centric transparency," providing basic methods for opening up some parts of the "algorithm black box," there is currently no complete technical solution that can achieve a comprehensive explanation of generative artificial intelligence algorithms . Acknowledging that there are unreachable blank areas for both regulation based on legal norms and regulation based on algorithmic technology does not mean compromise and defeat of regulation in the face of algorithms, but rather inspires us to change our regulatory concepts and make corresponding adjustments to the direction and focus of regulation .
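"Counterfactual explanation," mentioned above, can be illustrated with a deliberately simple sketch (the classifier and the search procedure are toy assumptions): instead of opening the model, it reports a nearby change to the input that would have flipped the decision.

```python
import numpy as np

# A black-box binary decision that we can query but not inspect (a toy stand-in).
def black_box(x):
    return int(1.5 * x[0] - 0.7 * x[1] + 0.2 > 0)

def counterfactual(x, step=0.05, max_radius=2.0, samples=200, seed=0):
    """Search outward in growing radius for a close input with a flipped decision."""
    rng = np.random.default_rng(seed)
    original = black_box(x)
    radius = step
    while radius <= max_radius:
        for _ in range(samples):
            direction = rng.normal(size=x.shape)
            direction /= np.linalg.norm(direction)
            candidate = x + radius * direction
            if black_box(candidate) != original:
                return candidate     # roughly the smallest change that flips the outcome
        radius += step
    return None

x = np.array([0.2, 1.0])
print(black_box(x))                     # 0, e.g. "application refused"
cf = counterfactual(x)
print(np.round(cf, 2), black_box(cf))   # a nearby input for which the decision is 1
```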
V. The Shift in Algorithm Regulation: From Legality to Trustworthiness
To better accomplish the task of layered algorithm regulation, we need to return to the epistemological origin of algorithm regulation, re-examine the legal philosophical view of algorithm regulation, adjust the legal theoretical stance for evaluating the relationship between norms and algorithms, and reshape a more objective and rational legal theory of artificial intelligence . It should be pointed out that traditional legal concepts have always followed a potential stance of "normativism." Once the concept of "norm" is used, people often consciously or unconsciously endow it with a kind of invincible power . Whether it is the moral condemnation power inherent in social norms themselves, or the state coercive power conferred upon them, "norms" are elevated to the perspective of governing behavior, establishing a power imbalance with the regulated objects, and radiating from top to bottom to the regulated objects with the advantage of power, judging their legality at any time . This may be the potential disciplinary logic of legal scholars, carrying an implicit disciplinary gene . Norms are doctrines that can guide, warn, punish, or reward behavior, and they have been repeatedly proven effective when applied to human behavior in real life . In the process of applying norms, all behaviors are attempted to be placed within the legal concept pyramid of legality judgment . But regrettably, algorithms are not human behavior . In the field of algorithms, regulation has lost its dominant position, and legality judgment is no longer necessary, and may even be powerless . If an autonomous vehicle hits a person, what exactly are we regulating? If a car driven by a person hits someone, what is regulated is the person's driving behavior, and what is judged is the legality of the person's driving behavior, not the car . Here, this driving behavior is regulated as a whole, only distinguishing between subjective (intentional or negligent) and objective (driving behavior), emphasizing the causal relationship between the behavior and the harmful result (hitting someone) . We do not need to further subdivide the objective driving behavior, whether the driver stepped on the accelerator instead of the brake, or turned the steering wheel incorrectly, or other reasons . Because these behaviors can all be categorized under the concept of "traffic violation," and only through this explanatory method with inductive and generalizing nature can the facts be subsumed under the relevant regulatory norms of "traffic violation" . Regardless of the type of erroneous driving behavior, it can be summarized and remain at the conceptual level of judging the legality of human behavior, without needing to converge upwards or refine downwards, which is sufficient to support the normative regulation of the behavior of a car hitting someone . Because punishment acts on the actor. Whether it is a monetary fine or a restriction on personal freedom, the actor (driver) is a natural person entity who becomes a complete, independent normative object with clear entity boundaries . At the same time, the punishment of the actor (driver) can be empathized with and understood in its regulatory significance by the victim (the person hit), because the victim is also an individual person . Therefore, the normative regulatory effect on human behavior is interconnected, universal, and corresponding between people. 
Norms based on behavior can serve as a key criterion for judging the legality of human behavior, constituting one of the sources of human social order.
However, this is difficult to achieve with autonomous vehicles . The determination of liability for traffic accidents caused by autonomous vehicles has the problem of uncertainty in the regulatory object . One of the basic requirements for liability determination is subjective fault, and whether the driving program operated by an algorithm has subjective fault may be a thorny issue . If there is no basis for subjective fault, then the traditional regulatory logic of liability determination will be broken, which requires considering whether a new regulatory system for liability determination based on algorithms should be established . If there is subjective fault, then the determination of this subjective fault should be whether the algorithm is regarded as an anthropomorphized "driver" and held responsible for the algorithm's automatic decision-making itself, or whether it should be attributed to the algorithm designer? Regardless of which option is chosen, there are deeper problems . In the former case, attributing responsibility to the algorithm itself will not produce a punitive effect . Algorithms do not have their own interests, and without the deprivation of personal interests, the effect of punishment is lost . The only way to deal with algorithms is to disable them, and even so, it is difficult to interpret this as punishment for the algorithm . In the latter case, attributing responsibility to the algorithm and holding the algorithm designer accountable are two different meanings of determination . Attributing responsibility to the algorithm is based on the determination of fault in the traffic driving accident itself, while holding the algorithm designer accountable is based on whether they have fulfilled sufficient due diligence in designing the algorithm . Sufficient due diligence mainly refers to whether sufficient prediction and prevention have been made for the risks of various traffic application scenarios that may occur in the future . However, this does not mean requiring the achievement of prevention without omission for the risks of all application scenarios . Application scenarios are diverse and open, possible risks are inexhaustible, and the part that algorithm designers can "teach" autonomous vehicles is ultimately limited . Therefore, we can and should certainly pursue the fault and responsibility of algorithm designers, but this is not true regulation in the real sense; it merely finds a related object, replacing the attribution of responsibility to the algorithm with holding the algorithm designer accountable . The above analysis shows that the legality approach of traditional legal normative regulation is difficult to effectively regulate algorithms . This forces us to reflect on the feasibility and possibility of the consistent stance of "normativism" and "legal centralism" in the field of algorithm regulation . But merely realizing the problem is not enough; we also need to ask: what is our purpose in regulating algorithms? What kind of effect do we want to achieve? We need to re-examine this preconceived notion, critically recognize the effectiveness of single legality judgment as traditional normative control, and layered regulation is precisely to make up for this deficiency . 
Once we abandon normativism and acknowledge the limitations of norms (whether legal principles or legal rules) in regulating the hidden layer of algorithms (whether targeting their generality or their specificity), and establish an overall cognitive framework of layered regulation, we will find that legality judgment based on subsuming facts under norms is no longer the only path. We need to turn instead to a regulatory logic with greater explanatory power for this framework.
Taking the regulation of algorithm transparency as an example, establishing a layered regulatory framework means: first, at the regulatory level of generality in the algorithm's phenomenon layer, with the establishment of the right to explanation of algorithms as the basic principle, one is to require personal information processors to "ensure the transparency of decisions when using personal information for automated decision-making," and to "follow the principles of openness and transparency when processing personal information, disclose rules for processing personal information, and clearly indicate the purpose, methods, and scope of processing"; second, it is stipulated that when algorithmic automated decision-making may have a significant impact on personal rights and interests, the data subject has the right to request the decision-maker to provide explanations on relevant circumstances . Although the above-mentioned right to explanation of algorithms has made general provisions requiring algorithm transparency at the phenomenon layer, due to the uncertainty of concepts in the principles and the broad scope of application, it is still necessary to specifically implement them in the form of rules in different scenarios. Thus, regulation reaches the level of specificity in the algorithm's phenomenon layer . At this level, rules adopt different regulatory solutions for the specificity of algorithms . On the one hand, corresponding regulations are made for algorithm transparency in different scenarios, such as the Guidelines on Information Disclosure for Financial Applications of Artificial Intelligence Algorithms, which consider various factors such as data, computing power, and scenarios involved in financial algorithm applications, and make detailed provisions on the conditions, methods, dimensions, and content of information disclosure to improve the explainability and transparency of artificial intelligence algorithms in the financial field; the Guiding Opinions on Implementing the Responsibilities of Online Food Delivery Platforms and Effectively Safeguarding the Rights and Interests of Food Delivery Riders require platforms to disclose to delivery riders algorithm rules directly related to the basic rights and interests of workers, such as order allocation, working hours, and rest, to ensure that the rights and interests of delivery riders are not silently eroded by algorithms . On the other hand, legal rules also subdivide the different transparency of algorithms in the same scenario, such as the Regulations on the Management of Algorithm Recommendations, which require algorithm recommendation service providers to "optimize the transparency and explainability of rules such as retrieval, ranking, selection, pushing, and display" to avoid adverse effects on users; they should "clearly" inform users of the situation of their provision of algorithm recommendation services and publicize the purpose, intention, and main operating mechanisms of algorithm recommendation services in an "appropriate manner" . When regulation sinks to the level of generality in the algorithm's hidden layer, regulation based on norms no longer works, and regulation based on technology begins to play a role . For example, hidden layer analysis algorithms use visualization technology to analyze the local features of the latent layer in neural network models, achieving the explanation of the neural network prediction process . 
This method belongs to the explanation of the prediction results and decision-making process of black box models . It acts on the hidden layer of algorithms, aiming to explain algorithms with algorithms in response to the universally existing problem of algorithm black boxes . For another example, empirical research shows that visualization heatmaps directly generated by the system have a better effect on promoting the understanding of decision-making subjects through algorithm explanation than textual explanations . Finally, at the level of specificity in the algorithm's hidden layer, that is, the area where neither law nor technology can reach, we no longer pursue the transparency of algorithms but acknowledge the inexplicability of algorithms at this level, and then shift regulation to the trustworthiness of algorithms: in the unregulatable area where transparency and explanation are impossible, whether the algorithm is knowable, visible, and understandable is no longer important to us; what is important is whether the algorithm is trustworthy to us . Trust can better promote understanding and interaction among all parties than transparency . Reconstructing the right to explanation of algorithms based on the principle of trust is particularly important in the practice of regulating algorithm transparency, which requires us to construct a relationship based on trust rather than consent . The pursuit of trustworthiness means that the regulation of algorithm transparency is no longer centered on legality but aims to establish a mechanism of mutual trust that promotes "human-machine collaboration" and the participation of all parties involved in algorithms, and to promote the application of trustworthy algorithms under this mechanism . The prerequisite for achieving this goal is to eliminate individuals' concerns and precautions regarding algorithm systems. For this reason, the regulatory path no longer completely relies on transparency and explanation, because complete transparency may lead to information redundancy and cause perceptual numbness; and unlimited explanation, for ordinary people who cannot understand the operating principles of algorithms, will instead reduce their trust in algorithm systems . The regulatory path of algorithm transparency shifts to relying on the establishment of communication and trust mechanisms. As shown in Table 2:
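The heatmap idea can likewise be sketched in a few lines (a toy gradient-saliency example over assumed random weights, not the systems examined in the cited studies): the sensitivity of the model's score to each input cell is rendered as a heat value, indicating which parts of the input the hidden computation relied on.

```python
import numpy as np

rng = np.random.default_rng(3)

# A small model standing in for a trained network: one hidden layer over a flattened 4x4 "image".
W1 = rng.normal(size=(16, 12))
W2 = rng.normal(size=(12, 1))

def predict(x_flat):
    h = np.tanh(x_flat @ W1)
    return float(h @ W2)

def saliency_heatmap(image):
    """Finite-difference gradient of the score w.r.t. each pixel, shown as a 4x4 heat grid."""
    x = image.reshape(-1)
    base = predict(x)
    grads = np.zeros_like(x)
    eps = 1e-4
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        grads[i] = (predict(xp) - base) / eps
    heat = np.abs(grads).reshape(image.shape)
    return heat / heat.max()          # normalize to [0, 1] for display

image = rng.normal(size=(4, 4))
print(np.round(saliency_heatmap(image), 2))   # brighter cells drove the score more
```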
Table 2: Layered Regulation of Algorithm Transparency
Algorithm Position | Transparency Regulation Strategy
Generality in the Phenomenon Layer | Establish the right to explanation of algorithms; universal information disclosure; explanation of specific algorithms
Specificity in the Phenomenon Layer | Refine transparency requirements for specific scenarios; differentiated transparency regulation
Generality in the Hidden Layer | Use technical means for algorithm explanation (e.g., visualization of hidden-layer analysis)
Specificity in the Hidden Layer | Shift to algorithmic trustworthiness; build trust-based communication mechanisms
Norms, for behavior, possess a kind of instantaneous regulatory self-sufficiency. Regardless of the type of human behavior, and no matter how its circumstances change, it can be explained and then subsumed under certain concepts. Algorithms, however, easily "escape" from legal concepts and legal norms and are difficult to "capture" completely with traditional regulatory methods. This requires us to abandon the belief that legal norms naturally apply to algorithms and to begin reflecting on the feasibility and possibility of the consistent stance of "normativism" and "legal centralism" in the field of algorithm regulation. Algorithm-centered trustworthy artificial intelligence applications offer useful insights for the norm-centered legality regulation path. We must realize that the path of normative regulation is difficult to follow into the hidden layer of algorithms, and that the overall effectiveness of traditional regulation will be greatly reduced in the face of algorithms. Norms, owing to the specificity of their own medium, language and writing, have to stop at the boundary of their own limitations. Their further advance can only take the form of transforming their codes, switching to a digital mode, entering the internal structure of algorithms, and using the idea and technology of regulating algorithms with algorithms to coordinate and cooperate in achieving the internal governance of complex algorithms. This is an extremely difficult step for jurisprudence: it means breaking away from the traditional jurisdiction of norms, stepping out of the scope of conceptual subsumption, and engaging with new cognitive frameworks and with algorithms unfamiliar to the academy. It is a theoretical iteration through which theoretical jurisprudence faces a constantly evolving artificial intelligence technology. Although this path is arduous, it is also the only way to the governance of trustworthy artificial intelligence. Only by understanding the jurisprudential significance and structural logic of layered algorithm regulation, and by realizing the regulatory transformation from legality to trustworthiness, can human intelligence continue to maintain its existing advantages in the face of artificial intelligence, and only then will the rule of law remain the rule of law, rather than becoming the rule of algorithms.