
Normative Analysis and Substantive Boundaries of Algorithmic Technology in Public Decision-Making



*Written by Zhao Hong

Professor, School of Law, China University of Political Science and Law



Abstract: The application of algorithms to decision-making by public institutions significantly enhances national governance capacity, but it may also bring new challenges to the protection of individual rights. Current algorithmic regulations mostly emphasize due-process control and do not draw clear boundaries for the application of algorithmic technology in public decision-making. Furthermore, the empowerment model adopted by China's Personal Information Protection Law is mainly constructed around scenarios in which private institutions apply algorithms, leading to normative dislocation and lax enforcement when public authorities are involved. Although there is no global consensus on the substantive boundaries of algorithmic technology in public decision-making, the traditional principle of legal reservation can still serve as the basic framework for thinking about this issue. Under a relationship model of "prohibited in principle, permitted by exception", the principle requires legislators to balance government efficiency against rights protection when algorithms are applied in public decision-making. In addition, factors such as the protection of fundamental rights, the controllability of risks, value judgments, discretionary power, the types of algorithms and the data involved should be taken into consideration before exceptions are authorized by law. Finally, an effective algorithmic impact assessment system can serve as a preventive means of delimiting the boundaries of decision-making, thereby strengthening the democratic legitimacy and accountability of algorithms in public decision-making from the outset.

Keywords: algorithmic decision-making; freedom from automated decision-making; principle of legal reservation; algorithmic impact assessment


1 Introduction


In the digital age, algorithms are changing the fabric of human decision-making in their own unique ways. Algorithms were initially used to make decisions in private life and business, such as news feeds, mobile navigation, merchandising, and healthcare. Thanks to their speed, accuracy, and ability to handle complex problems, algorithmic technology has moved from private life into public decision-making. So far, the application of algorithmic decision-making by public institutions in China has come to cover many fields, such as administrative approval, traffic law enforcement, credit assessment, tax inspection, and risk prevention and control. Entrusting decision-making power to algorithms does not simply mean a change or iteration of decision-making methods; its typical consequence is that the algorithmization of decision-making turns algorithms themselves into a form of power, that is, algorithms become a public force with a general and lasting impact on individual lives, and once algorithmic technology is combined with state power, it is no longer even merely a "quasi-state power". The algorithmization of public decision-making significantly empowers national governance, reflected not only in the timely, accurate and efficient collection, processing and feedback of governance information, but also in the gradual transformation of an administrative system originally driven by legal rules, administrative orders and civil servants into an algorithmic administrative system of "automatic analysis, automatic decision-making and automatic execution" driven by software code and statistical operations.

However, the combination of algorithms and public decision-making has complex, multi-layered effects: it upgrades administrative decision-making, supervision and law-enforcement capabilities, but it also inevitably erodes individual rights excessively. In many scenarios, owing to a lack of vigilance and the lag of legal regulation, the combination of algorithms and public power has even given rise to an unfettered hegemony, which not only breaks the balance between power and rights that the traditional rule of law constructs through power constraints and rights protection, but also widens the power gap between the state and individuals and re-triggers a crisis in the protection of individual rights. After algorithms entered public decision-making, many scholars paid attention to their complex externalities, suggested that we should be vigilant against the algorithmization of public decision-making, and emphasized using law to tame algorithms. But compared with algorithms that have been advancing unhindered in the public sphere, these ideas remain largely conceptual. In particular, much of the existing discussion has steered the regulation of algorithmic power toward algorithmic interpretation and due process, arguing that as long as the problems of algorithmic comprehensibility and procedural control are solved, the problems of algorithmic bias, algorithmic discrimination, and the impossibility of pursuing responsibility caused by the algorithmic black box can be effectively eliminated. These discussions are certainly useful for taming algorithms, but they seem to ignore an important substantive legal premise, namely, whether algorithms can be applied indiscriminately in all public decision-making. Is it necessary for public decision-making to set thresholds for the application of algorithms? Since the combination of state power and algorithmic technology forms a more powerful domination and suppression of individuals, allowing algorithms to enter all decision-making fields unguardedly means that relying only on algorithmic explanation during the process or accountability mechanisms after the fact cannot prevent individuals from being enslaved and suppressed by algorithmic hegemony. Based on these considerations, this paper attempts to explore the scope and substantive boundaries of public decision-making to which algorithms may fully be applied, and aims to set strict conditions at the source for the initiation of algorithmic decision-making.


2 Typical problems in the application of algorithms in public decision-making


The combination of state power and data power has been described by many scholars as "rule by data". Digital governance not only improves the efficiency of administrative supervision, but also upgrades national governance capabilities. This empowerment effect rests on the great advantages of algorithms in deep learning, their structural embedding in the systems through which social power operates, their pervasive intervention in social life, and their universal domination of human behavior. However, precisely because of these advantages, the combination of state power and algorithmic technology can easily give rise to an almost unconstrained hegemony, and the state may use algorithmic empowerment to form an even more powerful domination and suppression of individuals.


2.1 Algorithmic manipulation and loss of individual subjectivity

With the support of algorithms, the reach of state power is no longer limited by the space and time of the physical world: through the interconnection of information it can cover everyone in social life within a very short time, and cover every place and moment of their lives. This also makes massive, large-scale rights violations more likely, rather than the traditional isolated, one-way violations.

Existing studies have summarized the impact of algorithms on individual rights as algorithmic discrimination and algorithmic manipulation, and hold that they challenge and undermine individuals' privacy rights, equality rights, and human dignity. Behind the infringement of the rights to privacy and equality lies the manipulation of individuals by state power with the help of algorithms. Relying on the widespread collection of personal information, algorithms make it easier for governments to accurately monitor and track individuals; digital technology datafies and digitizes individuals, making them more easily monitored and controlled by the state; the algorithmic black box drags decision-making power into an opaque zone constructed by technical complexity, further swallowing up the space for the exercise of individual rights, so that individuals affected by algorithmic decisions not only lose the opportunity to participate in, question or even oppose the decisions, but also lose the guarantee of due process. The absence of an identifiable responsible person in automated decision-making also makes it impossible for individuals to push back through traditional accountability or redress mechanisms. As a result, after algorithms are applied, individuals face not only risks of infringement such as privacy exposure, differential treatment, and derogation of rights, but also the deprivation and degradation of their subjectivity as human beings.


2.2 Algorithmic decision-making breaks through the traditional rule of law

The main ways used by the traditional rule of law to restrain public power are authority control, procedural control and consequence control, that is, legal control before, during and after the event.

Authority control relies mainly on the "principle of administration in accordance with law", which consists of legal precedence and legal reservation. Legal reservation refers matters relating to the fundamental rights of individuals to elected bodies, thereby preventing individual rights from becoming the object of administrative arbitrariness. Legal precedence requires that administrative activities be based on law, and that constituent elements and legal consequences be selected in accordance with the law's instructions and not deviated from without authorization. However, after the introduction of algorithmic decision-making, because of the inherent assumption that algorithms merely change the tools of decision-making, no precautions are usually taken and no thresholds are set for their entry, so that many matters bearing on individual rights are easily handed over to algorithms for decision without prior assessment and democratic resolution.

A further mechanism by which the traditional rule of law limits public power is procedural control. However, the principle of due process of law enshrined in the traditional rule of law has also been upended by the application of algorithms. Algorithmic decision-making compresses all phases of administrative activity, feeding all information and content into the established algorithm to obtain a result, and the result is produced instantaneously within the system; at that point it is no longer possible to separate the procedures, steps and methods of administrative activity, and the regulation of administrative phases and steps by due legal process is suspended. Corresponding to the suspension of procedural requirements is the reduction of the parties' right to participate in the proceedings. Algorithmic decision-making is often considered "depersonalized" (entpersonalisiert), which not only eliminates the possibility of communication between the administration and the people, but also leads, because of its opacity, to difficulties in understanding and implementation, ultimately reducing social trust in public administration.

There is also the issue of redress and accountability after the fact. Because of the lack of transparency and of a clearly identifiable responsible person, the parties' right to relief cannot be fully protected when the algorithm produces unfair results such as bias and discrimination. Moreover, the Personal Information Protection Law of the People's Republic of China (hereinafter the "Personal Information Protection Law" or "PIPL") provides only for the internal liability of state organs that fail to perform their personal information protection obligations, in Article 68, paragraph 1: "Where a state organ fails to perform its personal information protection obligations as provided for in this Law, its superior organ or the department performing personal information protection duties shall order it to make corrections; the directly responsible managers and other directly responsible personnel shall be given sanctions in accordance with law." As a result, although reconsideration and litigation against algorithmic decisions can be justified in theory and logic, they lack a practical normative basis.


3 Issues concerning the norms and application of algorithmic decision-making in the Personal Information Protection Law


The PIPL addresses algorithmic decision-making only in Articles 24, 55 and 73. Article 73 clarifies that "automated decision-making" means "the activity of automatically analyzing and evaluating an individual's behavioral habits, interests and hobbies, or economic, health and credit status through computer programs, and making decisions". Article 55 requires that when "personal information is used for automated decision-making", the personal information processor conduct a personal information protection impact assessment in advance and keep records of the processing. Article 24, the core norm on automated decision-making, has three paragraphs, two of which can be applied directly to public institutions: first, personal information processors using personal information for automated decision-making shall ensure the transparency of the decision-making and the fairness and impartiality of the results; second, where a decision with a significant impact on an individual's rights and interests is made through automated decision-making, the individual has the right to request an explanation from the personal information processor and to refuse decisions made by the processor solely through automated decision-making. These two provisions impose on public institutions applying algorithmic decision-making the obligations to "explain" and to "ensure the transparency of decision-making and the fairness and impartiality of the results", and likewise give individuals the right to request explanations and to reject decisions made solely through automated decision-making. However, to what extent these provisions constrain public institutions' application of algorithms in decision-making, and what the deficiencies of such constraints are, still requires careful analysis of the provisions themselves.


3.1 Freedom from automated decision-making: the debate between right and prohibition

In terms of the structure of the provision, Article 24 of the PIPL is modeled on Article 22 of the European Union's General Data Protection Regulation (hereinafter "GDPR"). However, there has been controversy over whether the GDPR's position on automated decision-making should be understood as a general prohibition of automated decision-making or as empowering individuals to be free from it. This controversy forms the background for analyzing Article 24 of the PIPL.

The European Union's Data Protection Directive (DPD), the predecessor of the GDPR, provided for the right of data subjects to be free from fully automated decision-making based solely on profiling, out of an emphasis on human subjectivity and in order to avoid the alienation of individuals in an algorithmic society. The GDPR follows and refines the DPD's normative model: (1) the data subject has the right not to be subject to a decision based solely on automated processing which produces legal effects concerning him or her or similarly significantly affects him or her; (2) this does not apply where the decision is based on the data subject's explicit consent, is authorized by law, or is necessary for the performance of a contract; (3) where the exemptions apply, the data subject retains the right to human intervention and to contest the decision; (4) the three exemptions in paragraph 2 shall not apply to the processing of sensitive data, unless the data subject's explicit consent has been obtained or appropriate measures have been taken in the public interest to safeguard the data subject's rights.

From the wording of the provision, Article 22 of the GDPR gives the data subject the right to object to automated algorithmic decision-making: the data subject can actively intervene afterwards against algorithmic bias or injustice and refuse to recognize the decision's validity. However, when the EU's Article 29 Working Party issued its explanatory document, it took the view that Article 22 is a prohibition of automated decision-making, that is, fully automated decision-making and profiling that produce legal or similarly significant effects are in principle prohibited. This opened a long-standing debate over whether the provision is a prohibition or a right.

Whether it is a "prohibition" or a "right" has very different legal effects. If understood as empowerment, the information processor may still make decisions about personal information by automated means until the data subject refuses; under the prohibition reading, automated decision-making by the data processor is impermissible from the outset unless it meets the exemptions set out in the GDPR. The argument for the prohibition reading is that a right to be free from automated decision-making cannot fundamentally protect individuals from its erosion and influence: the algorithmic black box makes automated decision-making extremely hidden, individuals can hardly detect whether, by whom and in what way their data are processed, and the ex post right to object is not only difficult to exercise but also generally directed at the automated decision-making itself rather than the final processing result. Germany's Federal Data Protection Act and the UK's draft Data Protection Act adopted the prohibition model for these reasons. Proponents of the rights reading, however, believe that automated decision-making can bring objective benefits to society as a whole; although the prohibition model can protect data subjects from algorithmic manipulation, it would block technological innovation and affect the benign development and reasonable application of algorithmic technology. Moreover, at a time when automated decision-making has substantially entered all areas of social services, a general prohibition also imposes high compliance costs on enterprises.


3.2 The provisions and issues of Article 24 of the Personal Information Protection Law

Judging from Article 24 of the Personal Information Protection Law, China has chosen a relatively neutral position on automated decision-making: it does not generally prohibit the application of automated decision-making in the private and public spheres, but gives individuals the right to "request an explanation from the personal information processor" and to "refuse decisions made by the personal information processor solely through automated decision-making", while requiring processors to "ensure the transparency of decision-making and the fairness and impartiality of the results".

"The right to object to a decision made by a personal information processor solely through automated decision-making" is directly aligned with "the right of a data subject to object to a decision made solely on the basis of automated decision-making that produces legal effects or similarly significantly affects him/her" in the GDPR. In the provisions of the GDPR, this right not only gives the data subject the right to decide whether to accept the decision or not to be bound accordingly at the post-event stage, but also contains a rich supporting system of rights such as the right to object, the right to know, the right to manual intervention, and the right to expression. Its essence is to give data subjects the right of general opposition after the fact, so that when individuals fail to obtain early warning in advance and launch a blockade in time, they can still mitigate and reduce algorithm risks through procedural remedies after the fact.

The reason China adopts the rights model rather than the prohibition model is, of course, to strike a balance between the interests of data use and individual subjectivity. Judging from the provision, however, the following requirements must be met in order to exercise the right to be free from automated decision-making:

First, the decision must be made solely through automated decision-making. This requirement is consistent with Article 22 of the GDPR: the right to be free from automated decision-making applies only where the decision-making process is carried out entirely by the system, without any human intervention, and the outcome is not influenced by any human factor. Using this as a condition for exercising the right is, however, controversial even in the EU. "In practice, human-in-the-loop feedback, in which algorithms interact with human judgment, is quite common, and automated decision-making with human participation is not necessarily better than pure machine decision-making in terms of stability and accuracy." The provision can therefore be said to unduly limit the types of automated decision-making it covers. In addition, such a requirement may induce algorithm controllers to resort to token manual intervention, or to fabricate traces of manual intervention, in order to circumvent the provision.

Second, the decision must have a significant impact on the rights and interests of the individual. Article 22(1) of the GDPR expresses what the PIPL calls "a significant impact on the rights and interests of an individual" as "legal effects concerning the data subject or similarly significant effects". Since the PIPL provision is modeled on the GDPR, the latter's interpretation can serve as a reference. The Guidelines on Automated Individual Decision-Making and Profiling published by the EU's Article 29 Working Party define "legal effects" as the impact of processing activities on an individual's legal rights, contractual rights and legal status; "significant impact" means "having a long-term or permanent impact on the data subject, or, in the most extreme cases, resulting in exclusion or discrimination against an individual"; and "similarly significant effects" substantially expands the criteria for determining impact. None of these interpretations, however, meets the precision required for applying the provision, and the conceptual contours remain vague.

Third, a comparison with the earlier draft of the Personal Information Protection Law reveals another implicit premise for exercising the right. The first draft stipulated that "where an individual believes that automated decision-making has a significant impact on his or her rights and interests, he or she has the right to request an explanation from the personal information processor and to refuse decisions made by the processor solely through automated decision-making". "Where an individual believes", however, was later considered "too subjective and liable to increase the burden on enterprises", so the official text was amended to read: "Where a decision that has a significant impact on an individual's rights and interests is made through automated decision-making, the individual has the right to request an explanation from the personal information processor and to refuse decisions made by the processor solely through automated decision-making." This change means that individuals must bear the burden of proving that automated decision-making has a significant impact on their rights and interests.

These relatively strict conditions of exercise limit the constraining effect of the empowerment model on algorithmic decision-making. Beyond the conditions of exercise, the PIPL provision leaves several further ambiguities. First, the right to be free from automated decision-making is paired with the individual's "right to request an explanation from the personal information processor". From the wording, however, the premise of the request for explanation is the same as that of the right to be free from automated decision-making, namely that the data processor has already "made a decision with a significant impact on the individual's rights and interests through automated decision-making". The explanation required here is therefore not an ex ante right to know and to refuse, and it cannot ensure that the data subject obtains the necessary information, forms reasonable expectations, or even refuses the decision before the automated decision-making takes effect. Second, although the article provides for the individual's right to be free from automated decision-making, it is unclear whether the individual may refuse only "the making of decisions by the personal information processor solely through automated decision-making" or also "the result of the automated decision-making". If the right to refuse is directed only at "automated decision-making" and not at its result, then once the result has been produced, the ex post right to refuse does little to eliminate the consequences of the automated decision.


4 The absence of boundaries for the application of algorithms in public decision-making and its causes


The above analysis shows that, because of the empowerment model, China does not substantively limit at the source the application of algorithmic technology to public decision-making. Algorithms can still be applied freely and unhindered in public decision-making, and the only possible obstacle is the information subject's ex post right to be free from automated decision-making. That right, however, is essentially only a control at the ex post stage and cannot provide risk warning in advance or effective blocking during the process. Moreover, its exercise is constrained not only by the statutory conditions but also by the practical reality that data subjects may be negligent or too weak to exercise it. From this point of view, although the PIPL provides some normative basis for algorithmic decision-making, these norms are not only weak in quality and effect, but also conceal a major gap: the absence of substantive boundaries for the application of algorithms in public decision-making. There are two main reasons for this omission: first, the influence of conventional approaches to the legal regulation of algorithms; second, the problems of the Personal Information Protection Law's "integrated adjustment" model.


4.1 Deficiencies in procedural control and subject empowerment

The general approaches to the legal regulation of algorithms mainly include algorithmic disclosure, personal data empowerment, and anti-algorithmic discrimination. Algorithmic disclosure is considered the core means of combating the algorithmic black box; its goal is to break the opacity of algorithms, and the bias, discrimination and manipulation that result from it, through algorithmic explanation. Personal data empowerment expands the system and types of individuals' algorithmic rights, so that individuals can win back control in the face of algorithmic technology without having their subjectivity eroded and swallowed up by the alienation of algorithmic power. Anti-algorithmic discrimination seeks to achieve identity-neutral algorithmic decision-making by eliminating the identity discrimination implicit in algorithms.

It is questionable, however, whether these regulatory approaches can resolve the question of whether algorithmic technology may be applied to public decision-making at all. The key is that algorithmic disclosure, anti-algorithmic discrimination, and even individual data empowerment are all embedded in due process for algorithms. For example, algorithmic disclosure and anti-algorithmic discrimination are mainly achieved through the data processor's explanation of the algorithm during the decision-making process, and personal data empowerment likewise gives individuals the rights to know, to express views, to raise objections, and to request human intervention throughout the algorithmic decision-making process, so as to ensure the procedural justice of algorithms. This model is regarded as a "technological due process" responding to AI technology: its specific deployment is adapted to emerging technologies, but its core elements remain consistent with traditional due process.

But this system of algorithmic rights embedded in due process is still, in essence, procedural control over algorithms. Procedural control aims to provide "procedural risk-prevention measures that reduce the probability, breadth and depth of algorithmic damage" in the course of an algorithm's operation, but it "can neither deterministically avoid the occurrence of algorithmic damage, nor provide, for algorithmic infringements that have already occurred, a normative and perceptible repair that meets the requirements of human reason and justice". Scholars have noted these shortcomings when discussing the empowerment model. For example, given that the right to refuse automated decision-making under Article 24, paragraph 3 of the PIPL is only an ex post remedy rather than whole-process individual participation, some scholars have tried to adapt it along the lines of the GDPR, arguing that the legislation does not confine this right to ex post remedies but also encompasses the rights to know, to object, and to intervene actively in the process of automated decision-making, so that the individual can know of and object to the processing of personal information by fully automated means before the decision-making begins, that is, participate in the whole process of automated decision-making. Yet even if, like the GDPR, the law gave data subjects a full-cycle, systematic set of individual algorithmic rights, significant omissions would remain in responding to algorithmic decisions. Whether it is procedural control on the surface or substantive empowerment embedded within it, what is missing is strict control at the source over algorithms entering public decision-making.

If the application of an algorithmic decision cannot be blocked at the source, the damage it may cause certainly cannot be completely avoided. This can be seen by looking back at the normative approach of the traditional rule of law: if the authority of administrative organs is not demarcated with the help of legal reservation before powers are exercised, and checks rely only on procedural control during the process or on the procedural participation of the counterpart, it is impossible to prevent the abuse of administrative power and the infringement of rights. The reason is that the protection model of subject empowerment can, in principle, be effective only when the two parties are equal and there is no obstacle to the exercise of rights; when there is a disparity in power between them, empowerment alone is not enough to change the power/rights gap. Moreover, in public law, individual rights do not correspond to the objective obligations of the state, and "rights are always individual and enumerative, and it is difficult to integrate the institutional self-discipline of authority and procedure in public law, as well as the protection of the abstract rights of citizens as a whole, into a system of individual rights". This is also why public law, in addition to constructing a complete system of individual public-law rights, emphasizes limiting the scope of public authorities' competence and their objective obligation to abide by the law.


4.2 The problem of the Personal Information Protection Law's integrated regulation of public and private spheres

Algorithms can vary significantly depending on who uses them, whom they target, and what problems they involve. Different scenarios naturally call for different regulatory methods. The problem with Article 24 of the PIPL is that it is mainly based on the application of algorithmic technology to private life and business, and does not consider the difference between applying algorithmic technology in the private sphere and in the public sphere.

This in turn relates to the PIPL's uniform normative model. Article 33 of the Personal Information Protection Law stipulates that "this Law shall apply to the processing of personal information by state organs". China thus treats state organs and other information processors alike on the issue of personal information protection, in principle applying an "integrated adjustment" model within the same legal framework. However, subjecting two different types of data activities to a regulatory framework built on private scenarios inevitably produces a misplaced focus and a weakening of the regulatory effect on public authorities.

As far as information processing is concerned, the processing of personal information by private institutions is governed by the core principle of "informed consent", the goal being to avoid the derogation and suppression of individuals' data personality by giving them control over their own data. This principle can serve as the legal basis for regulating private processing of personal information because the private information processor and the data subject stand essentially in a reciprocal exchange relationship. From this perspective, the GDPR's ex ante risk identification and early-warning mechanisms for algorithms, such as "informing the data subject of the existence and consequences of automated decision-making and profiling" (Recital 60), guaranteeing the data subject the right to obtain human intervention and to contest the decision (Article 22(3)), and giving the data subject an unconditional right to object to automated processing for direct marketing purposes and the right to object to other processing of data, including profiling, unless the controller demonstrates compelling legitimate grounds that override the data subject's interests, rights and freedoms (Article 21), are in essence extensions of the principle of informed consent.

However, considering that applying notice and consent to the performance of public duties would affect or even undermine the state's law-enforcement capacity, the PIPL exempts acts of state organs that are "necessary for the performance of statutory duties or obligations" from the consent principle, and even reduces the obligation to inform to "limited notification", that is, where laws or administrative regulations provide that the matter should be kept confidential or need not be notified, the individual need not be notified. Since notice and consent can no longer play a core regulatory role in the data collection and processing of public authorities, the full-cycle data empowerment of the parties derived from that idea, especially letting them know in advance and refuse, cannot effectively regulate the algorithmic decision-making of public institutions.

Moreover, unlike private institutions applying algorithmic decision-making, public institutions that resort to algorithms for decisions in many cases not only need not obtain the individual's consent, but leave the individual with virtually no possibility of refusal. Take the "Beidou Disconnection Case" as an example. At the end of 2012, the Ministry of Transport issued a notice requiring that, from January 1, 2013, tour charter buses, passenger coaches and dangerous-goods transport vehicles in the demonstration provinces install Beidou-compatible onboard terminals when replacing their vehicle terminals, and that vehicles failing to install such terminals as required not be issued road transport certificates or pass their verification. The counterpart, as the recipient of the decision, had no possibility of refusal. That public authorities need not obtain the parties' consent when applying digital technology to perform their duties is also implicit in norms other than the Personal Information Protection Law. For example, Article 41 of China's Administrative Punishment Law, as revised in 2021, does not make prior public consent a requirement for administrative organs' use of electronic technology monitoring equipment; the only preconditions are that the use be "in accordance with laws and administrative regulations" and "subject to legal and technical review".

These realities and norms show that regulating public institutions through an empowerment model constructed for private institutions' use of algorithmic technology is neither sufficient nor complete. Specifically, if public institutions are subject only to the thin empowerment provisions of Article 24 of the PIPL, without regulation of the boundaries of algorithmic decision-making, the lack of norms will indulge the ever-closer combination of algorithmic technology and public power, and individuals' subjectivity will be eroded by algorithmic manipulation.


5 Legal reservation as a framework for thinking about boundary delimitation and its considerations


To prevent the protection of individual dignity from being completely submerged in the pursuit of technological welfare, it is undoubtedly necessary to set boundaries on the application of algorithms to public decision-making. Some scholars have proposed that before algorithms are applied to public decision-making, a reasonable application list should be constructed to clarify which areas of government governance can be left entirely to algorithmic decision-making, which can be governed through human-machine interaction and cooperation, and which should prohibit algorithmic decision-making altogether. However, just as it is almost impossible to enumerate positive and negative lists of conventional government powers, it is even more difficult to delineate clearly the boundaries of algorithmic technology in public decision-making in the face of dynamically developing algorithmic technology and hidden, diffuse algorithmic risks. The limited extraterritorial legislation and doctrine treat this issue inconsistently, and a general consensus is largely absent. Yet even if a definite boundary cannot be fixed, it is still feasible and useful to draw on extraterritorial experience to identify roughly the factors that should be weighed when considering the boundary; this at least provides a criterion and a framework for judging whether an algorithm may be applied to public decision-making.


5.1 Legal reservation as the basis for formal legality

The first principle used by the traditional rule of law to determine the boundaries of the powers of public institutions, especially administrative organs, is the legal reservation. Legal reservation determines whether the executive is permitted to take certain measures to intervene in society; its logic is to leave the state's fundamental decisions to the parliament, which has the greatest democratic legitimacy, so that legislation plays the central role in guaranteeing fundamental rights and controlling executive power.

5.1.1 The aggravated legal reservation implied in the GDPR

That legal reservation should serve as the boundary for public authorities' application of algorithmic decision-making can be seen by reference to the GDPR. The three exceptions to the right to be free from automated decision-making under Article 22(2) of the GDPR are: first, the decision is necessary for the conclusion or performance of a contract between the data subject and the data controller; second, the decision is authorized by Union or Member State law to which the data controller is subject, which law provides for appropriate measures to protect the data subject's rights, freedoms and legitimate interests; third, the decision is based on the data subject's explicit consent. The second is clearly aimed at the application of algorithmic technology to public decision-making. To prevent this restriction from being generalized, the EU's Article 29 Working Party further specified in the Guidelines on Automated Individual Decision-Making and Profiling that the right to be free from automated decision-making may be restricted only where the restriction is expressly authorized by law, pursues the public interest, risk prevention and control, or the security and reliability of the services provided by the controller, and rests on appropriate measures to safeguard the data subject's rights, freedoms and legitimate interests; in such cases, the EU and its Member States shall take appropriate measures to safeguard the fundamental rights of data subjects and prohibit undue derogations from the right to be free from automated decision-making on disproportionate grounds of public interest.

In fact, both the prohibition theory and the rights theory consider the application of algorithms by private and public institutions together. If algorithms are treated differently for private and public institutions, and public institutions are taken to be prohibited in principle and permitted only where they meet exemptions provided by law, then the GDPR provisions above, under which public institutions must have a "basis of legal authorization" in order to resort fully to algorithms in decision-making, the purpose must be "the public interest, risk prevention and control, or ensuring the security and reliability of the services provided by the controller", and "appropriate measures to safeguard the data subject's rights, freedoms and legitimate interests" must be in place, can be understood as a legal reservation for algorithmic decision-making. In terms of type, this legal reservation is clearly what German law calls a "qualifizierter Gesetzesvorbehalt" (aggravated legal reservation): the public authority must not only have legal authorization to restrict citizens' rights by taking such measures, but the law itself must meet certain preconditions, pursue a specific purpose, or prescribe a specific method.

5.1.2 The relationship between principles and exceptions

Applying legal reservation to the use of algorithms in public decision-making means, first of all, that the legislative position is prohibition in principle and permission by exception, and the normative model is correspondingly expressed as a relationship of "principle and exception". Besides the GDPR, this relationship model is exemplified by the provisions of the German Federal Administrative Procedure Act on "fully automated administrative acts" (der vollautomatisiert erlassene Verwaltungsakt). Under section 35a of that Act, one of the prerequisites for a fully automated administrative act to be permissible is express authorization by a legal norm (Rechtssatz). Only normative authorization can entrust the making of administrative acts to a machine, which is a classic expression of legal reservation. The reason for demanding such a "reservation of permission" (Erlaubnisvorbehalt) is, first of all, the guarantee of rights under the rule of law. As a "better decision-maker", algorithms greatly improve administrative efficiency, but their inherent depersonalization also threatens the loss of human subjectivity. There is therefore always a tension between governmental efficiency and rights protection in the application of algorithmic decision-making. To ensure that human beings are not radically alienated, and on the ideal model of case-by-case justice, decisions by public institutions in information gathering, interpretation and decision-making should in principle be made by humans, and may be delegated to machines only for reasons of efficiency.

The requirement of specific normative authorization also means that the legislator is obliged to balance administrative efficiency against the protection of rights, and to specify separately in individual laws which matters, and to what extent, may be delegated to algorithms. This model, which does not regulate uniformly but leaves the matter to legislators case by case, also responds to the dynamic development of algorithmic technology and the evolution of human understanding of artificial intelligence. On whether all decision-making power can, in a general sense, be entrusted to algorithms, there have always been two opinions, corresponding to the theses of strong and weak AI (starke und schwache KI-These). The former holds that machines can learn all of humans' thinking and problem-solving abilities and can therefore make all human decisions, even more perfectly than humans; the latter argues that human intelligence lies not only in deriving the outcome of a problem but also in determining the path to its solution, that is, whether the problem is solved in some creative way or merely in the usual way, which, in the view of weak AI, machines find difficult to learn. In reality, views are often torn between these two tendencies, which also shows that requiring legislators to weigh the matter in specific legislation rather than making uniform provisions, and even to ensure that efficiency and rights protection reinforce rather than crowd out each other, may be the better way to avoid rash leaps in law and public decision-making in the age of algorithms.

5.1.3 The relaxed "law" in the legal reservation

Strictly interpreted, the "law" in the legal reservation should be law enacted by the legislature, so as to implement legislative restraint on the executive. It is worth noting, however, that both the GDPR's provision that "the decision is authorized by Union or Member State law to which the data controller is subject" and the German Federal Administrative Procedure Act's requirement of a normative basis for fully automated administrative acts relax the "law" that serves as the prerequisite of the reservation of permission. "Legal norms" in German law are not limited to statutes, but also include regulations, orders, autonomous bylaws and the like. Such a relaxed legal reservation is essentially a "reservation of legal norms", which means that, under a prohibition in principle, algorithms are still given relatively broad space in public decision-making.

This relaxed stance is also reflected in China's Personal Information Protection Law. Article 13 of the Law enumerates the lawful bases for the processing of personal information, with item (7), "other circumstances stipulated by laws and administrative regulations", as a catch-all. This means that, apart from the circumstances expressly listed in the article, the processing of personal information must be expressly authorized by "laws and administrative regulations". That administrative regulations may authorize in addition to statutes can be regarded as an expansion of the "law" in the legal reservation. Consistent with the catch-all in Article 13(7), Article 34 of the PIPL also expands the normative basis for state organs performing statutory duties, stipulating that "the processing of personal information by state organs for the performance of statutory duties shall be carried out in accordance with the authority and procedures prescribed by laws and administrative regulations", which likewise means that administrative regulations, in addition to statutes, may serve as the basis of statutory duties. Article 41 of China's Administrative Punishment Law on the use of electronic technology monitoring equipment is also consistent with the Personal Information Protection Law: "Where an administrative organ uses electronic technology monitoring equipment to collect or fix facts of violations in accordance with laws and administrative regulations, it shall undergo legal and technical review to ensure that the equipment meets the standards, is reasonably sited and bears obvious signs, and the locations of installation shall be announced to the public."

This shows that, with respect to personal information processing and the use of data technology, part of the authority of statute has been extended to administrative regulations; although this does not conform to the strict legal reservation, it accords with the reality of how the legal reservation is implemented in China. Therefore, if a public authority directly applies an algorithm to make a decision affecting personal freedom or property rights, it is naturally subject to China's Administrative Punishment Law and Administrative Coercion Law; and even where the decision does not fall directly within the field covered by those conduct laws, as long as the public authority makes algorithmic decisions based on the processing of personal information, it should at least have an authorizing basis in administrative regulations. The PIPL extends the lawful basis for processing personal information to administrative regulations, but not to local regulations, rules and other lower-level norms; the purpose is also to prevent administrative organs from using the change in decision-making methods as cover to break through boundaries and empower themselves without authorization.

5.1.4 Aggravated legal reservation as the applicable type

According to the experience of the GDPR, for public institutions to resort fully to algorithms in decision-making there must not only be a "basis of legal authorization"; the purpose of the authorization must also be limited to "the public interest, risk prevention and control, or ensuring the security and reliability of the services provided by the controller". In contrast to the "simple legal reservation", which is only a general authorization, the aggravated legal reservation also limits the legislator's competence, thereby avoiding the legislative abuse that would result from submitting certain matters to statute without any restriction. Given the enormous risks that arise when public authorities hand decision-making power over to algorithms, namely that the rule of law may be hollowed out, rights derogated from and individual subjectivity eroded, the complete algorithmization of public decision-making should not only rest on legal authorization; the authorizing law should also make detailed provisions on the purpose pursued, the preconditions and the manner of use. In other words, the aggravated legal reservation should be the primary choice for specific legislation. In particular, the legislator should be guarded against disproportionately derogating from the parties' data rights on the grounds of abstract public interests such as "maintaining public safety" or "responding to a public health emergency", and must concretely frame and explain such general and broad public interests in the specific enabling law. In addition, where public authorities are permitted to apply algorithms in decision-making, they should also bear the obligation to take appropriate measures to ensure the rights and legitimate interests of data subjects. This is an aggravating element of the legal reservation, and it also implements the principle of risk allocation: data processors and algorithm users, as the creators of risk, should bear greater risk-prevention responsibilities, so as to ensure proportionality between each party's benefits and its risk bearing.


5.2 Factors to be considered when authorized by law

As a formal framework, the legal reservation establishes the relationship model of prohibition in principle and permission by exception for the application of algorithms in public decision-making, and imposes on legislators the specific obligation to balance governmental efficiency properly against the protection of rights. The factors that legal norms should consider when granting exceptional authorization can draw on the traditional principle of legal reservation while also incorporating the characteristics of algorithmic decision-making. These considerations bear not only on whether the law should delegate at all, but also on how strictly or loosely the authorizing law should regulate.

5.2.1 Guarantee of fundamental rights

The original essence of the legal reservation is to protect rights: as long as administrative activities touch the fundamental rights of individuals, there must be a clear legal basis for the conduct. This is the rule-of-law dimension of the legal reservation: because rights are prior to the state, the executive needs legislative authorization and individual consent to intervene in them. Thus, even if the administration replaces its decision-making tool with an algorithm, it remains subject to the legal reservation as long as the decision touches an individual's fundamental rights. However, which fundamental rights require strict statutory authorization and which may be governed by norms below statute varies from country to country. German and Japanese public law has typically advanced from the earlier infringement reservation to the reservation of essential matters, which means that whether a matter falls under the legal reservation is no longer distinguished by the type of fundamental right: as long as an administrative decision concerns fundamental rights, it falls within the scope of the legal reservation, and the difference lies only in the normative strength required of the enabling law.

On the legal reservation in the strict sense, China's position remains relatively conservative, mainly covering the rights to personal liberty and property among fundamental rights. According to Article 8 of the Legislation Law of the People's Republic of China, "crimes and punishments; deprivation of citizens' political rights, and compulsory measures and penalties restricting personal freedom; and the expropriation and requisition of non-state-owned property" may only be governed by statute. China's Administrative Punishment Law and Administrative Coercion Law likewise reserve to statute the creation of penalties and compulsory measures involving personal freedom. These are typical expressions of the infringement reservation. Although it provides only bottom-line protection, the infringement reservation expressly enshrined in the relevant laws cannot be evaded merely because the way a public body exercises its power has shifted from humans to machines. If algorithmic decision-making generates a legally effective decision that directly affects the parties' rights to freedom and property, rather than mere implementation of or assistance to a decision, then the application of such algorithms must be expressly authorized by statute, and norms below statute obviously cannot supply the basis of legitimacy. This also means that the rights to freedom, including the right to life and personal liberty, should enjoy a higher degree of protection; legislators should be more strictly constrained when handing public decision-making that concerns these fundamental rights over to algorithms, and where these rights cannot be fully guaranteed, fully algorithmic decision-making should be explicitly prohibited.

5.2.2 Risk controllability and hierarchical protection

In fact, algorithmic technology has become a double-edged sword: it brings convenience and efficiency, but it also introduces major risks, and sometimes these risks spill over the technical system and evolve into domination and suppression of people. Some countries therefore apply a model of tiered protection and supervision based on the risks that algorithmic decision-making may cause and on how controllable those risks are, combined with specific scenarios. Canada's Directive on Automated Decision-making of 2019 exemplifies this tiered protection mechanism. It grades automated decision-making into four levels along four dimensions: the rights of individuals or communities, the health or well-being of individuals or communities, the economic interests of individuals, entities or communities, and the sustainability of ecosystems. Level I automated decision-making has a reversible and brief impact on these factors; Level II has a reversible and short-term impact; Level III has an impact that is difficult to reverse and ongoing; Level IV has an irreversible and perpetual impact. Two factors are thus weighed in deciding whether to delegate public decision-making power to algorithms. The first is the type of rights of the parties affected by automated decision-making: if algorithmic decision-making involves major rights such as the right to life or personal liberty, it should not be left entirely to the algorithm, which continues the idea of the legal reservation. The second is whether the risks arising from automated decision-making are reversible and temporary or irreversible and permanent; in the latter case, strict limits should be set on the application of algorithms.
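To make the tiered logic concrete, the following is a minimal sketch in Python (purely illustrative, not the Directive's official assessment tool; the input fields and level cut-offs are simplified assumptions) of how reversibility and duration of impact might map onto the four levels described above.

```python
from dataclasses import dataclass

@dataclass
class ImpactProfile:
    """Simplified description of an automated decision's expected impact on
    rights, health or well-being, economic interests, or ecosystem sustainability."""
    reversible: bool   # can the harm be undone?
    duration: str      # "brief", "short-term", "ongoing", or "perpetual"

def impact_level(profile: ImpactProfile) -> int:
    """Map an impact profile to the four levels sketched in Canada's
    Directive on Automated Decision-making (illustrative only)."""
    if profile.reversible and profile.duration == "brief":
        return 1  # Level I: reversible and brief impact
    if profile.reversible and profile.duration == "short-term":
        return 2  # Level II: reversible and short-term impact
    if not profile.reversible and profile.duration == "perpetual":
        return 4  # Level IV: irreversible and perpetual impact
    return 3      # Level III: impact difficult to reverse and/or ongoing

# Example: a decision whose effects persist and are hard to undo falls in Level III.
print(impact_level(ImpactProfile(reversible=False, duration="ongoing")))  # -> 3
```

Under the Directive, the higher the level assigned in this way, the more demanding the requirements placed on the deploying institution, such as notice, peer review, and human involvement in the decision.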

Canada's tiered protection offers a way of deciding whether authorization can be granted based on factors such as the magnitude and intensity of the risks posed by the technology and whether those risks are reversible. This continues the typical logic of legal regulation of technology. If the application of a technology may bring immeasurable and irreversible risks and is also unanimously condemned at the moral and ethical level, it should be completely prohibited by law, as with gene editing technology. If the application of a technology can bring considerable benefits but its risks are difficult to predict, and the political and social risks it carries may even pose a serious threat to freedom, democracy and human rights, it should be prohibited in principle and applied only by way of exception; the GDPR's stance of prohibiting facial recognition in principle rests on this consideration. If the benefits of a technology outweigh its risks and the expected risks are manageable, it should be allowed subject to compliance requirements. This idea of differentiating the use of technology according to the degree of risk and its controllability can likewise be drawn upon in determining whether algorithms may be applied to public decision-making.

5.2.3 Value judgment and discretion as a no-go area for decision-making

One consideration on the checklist is the creation of forbidden zones for the application of algorithms to public decision-making. This thinking has been put into practice in the 2016 Loomis decision in the United States and in the German Federal Administrative Procedure Act, and it can therefore also take the form of prohibitive provisions in the authorizing law.

(1) The Loomis case and the issue of value judgment

In the Loomis case, the trial court, referring to the assessment report produced by the COMPAS risk assessment tool, sentenced the defendant Loomis to six years' imprisonment and five years of extended supervision. The tool assesses a defendant's risk of recidivism on the basis of an interview with the defendant and information drawn from the defendant's criminal history. Loomis argued that sentencing based on the COMPAS assessment violated his right to an individualized sentence, and that because the COMPAS report only provided data relevant to particular groups, it also violated his right to be sentenced on the basis of accurate information.

Although the Wisconsin Supreme Court ultimately upheld the sentence, the court cautioned in its judgment that the risk score may not be used to "determine whether the offender is imprisoned" or to "determine the severity of the sentence", and that algorithmic decision-making tools can play only an auxiliary role in judicial adjudication and must not substitute for the judge. The reason lies not only in the limitations of algorithms in terms of data quality, data quantity and computing power, or in the fact that algorithms may inherit human biases that require human correction, but also in that "the work of interpreting and applying legal provisions itself contains requirements of value judgment, and this task must be completed by human judges with empathy rather than by resort purely to technical rationality, otherwise the subject status of human beings may be threatened." Here we can distill from the Loomis case a primary substantive boundary for applying algorithms to special public decisions such as criminal adjudication: if a public decision involves conflicts of interest and value judgments, it cannot be left to the algorithm.

This understanding is now widely accepted, and value judgment has become a substantive no-go area for algorithmic public decision-making. The reason is that value judgment relies on life experience and anticipatory judgment, and because the social environment cannot be fully digitized and symbolized, algorithms are often unable to comprehend and handle the human emotions and understandings needed to resolve value conflicts; nor do algorithms possess the empathy and sympathy for others generated by the inheritance of human civilization and by life experience. Extended from conviction and sentencing to the overall process of public decision-making, this conclusion becomes: fact-finding and the fixing of evidence can be carried out by algorithms, and in this part algorithmic decision-making is often more accurate and more efficient than human beings; however, the application of the law and the making of the decision cannot be left entirely to the algorithm. The application of law is not a purely deductive act of subsumption; it also requires "soft adjudication factors" such as intuition or a sense of law, and where cultural context is involved it depends even more on tacit knowledge or unconscious understanding, which machines undoubtedly do not possess. In areas where the law must be renewed through case-by-case resolution, the machine is bound to fail.

Beyond the limited capabilities of the machine, the deeper consideration for making value judgment a no-go area is that it can lead to an irresponsible application of law. Law is always bound up with responsibility, which is a common condition and moral requirement for the exercise of power or of rights: whoever exercises power over another person must face that person and take responsibility. But the machine does not vouch for its adjudication, let alone for the reasons behind it; it does not treat others as human beings, and it neither understands nor shows respect. It is power without responsibility. Such a fearsome power cannot lightly be admitted into judicial decisions and major public decisions, and "any step in this direction would be labeled a taboo experiment in legal ethics."

(2) The prohibition of fully automated administration in discretionary matters under the German Federal Administrative Procedure Act

Beyond value judgment, another substantive boundary refined from the Loomis judgment is that where a public decision leaves room for discretion, the public authority is required to reach the most appropriate treatment on a case-by-case basis, and such a decision cannot be left entirely to the algorithm. Because algorithmic results rely heavily on clustered data, algorithmic technology delivers standardized processing, which not only conflicts with the judge's discretion but also impairs the parties' right to an individualized judgment. Therefore, in addition to value judgment, whether the public authority enjoys discretion has become another reference point for weighing whether an algorithm can be used for an administrative task. "Highly uncertain tasks that require greater reliance on human discretion cannot be left to algorithms", as the German Federal Administrative Procedure Act likewise reflects. Article 35a of that law, in addition to requiring a normative basis, stipulates that only bound administrative acts, those involving no discretion, may be issued fully automatically; where an administrative act involves indeterminate legal concepts or discretion, fully automated administration is strictly excluded.

In terms of meaning, value judgment in the Anglo-American context corresponds to the margin of judgment in continental law and to discretion in the narrow sense, that is, discretion as to legal effect, and the reasons for treating the two as forbidden areas for algorithmic decision-making largely overlap. From a technically rational point of view, automated decision-making appears to reduce discretionary arbitrariness, improve consistency and objectivity, and avoid the typical mistakes humans make out of haste or carelessness. But it cannot capture all discretion-relevant information in mathematical models, so its capacity to handle individual cases is limited. Especially in the application of law, decisions often turn on semantic determination, interpretation and the weighing of values; machines are plainly not competent for this work and face great difficulty in choosing between conflicting aims and distributing rights and interests, for "legal syllogistic reasoning (Justizsyllogismus) is still far from being transformed into an automated subsumption process (Subsumtionsautomat)". As a result, at this stage "algorithmic decision-making remains limited to computational operations (Rechenoperationen) in a purely formal sense; where legal concepts require semantic determination or interpretation, or where norms give the administration room to supplement, evaluate and decide, it is up to humans to make the decision".

However, the claim that algorithms can never be applied at any level of discretion has also been questioned. In terms of legal technique, the exercise of discretion can be roughly divided into two stages: the first is general-abstract discretion (allgemein-abstrakte Ermessensausübung), for example the establishment of discretionary benchmarks through the formulation of administrative rules; the second is concrete discretion, in which the public official makes the final discretionary decision case by case with reference to those benchmarks. Looking at these steps closely, it is in fact feasible to leave the first stage to the algorithm. In the second stage, however, indeterminate legal concepts must be concretized under incomplete information so as to produce a case-specific evaluation that fits the characteristics of the matter; here human intervention is necessary, otherwise rights protection is endangered. The objection goes further: even if the structure of things could be woven into an algorithm through computability (Berechenbarkeit) and an AI system could answer legal questions, it "cannot explain the answer or make a legal argument", which is inconsistent with the justification requirement under the rule of law and therefore remains unlawful. Treating value judgment and discretion as forbidden areas for algorithmic decision-making also accords with the prevailing core assumption about the field of application of artificial intelligence: in highly complex and unforeseeable fields, the so-called VUCA domain, the use of machines should be excluded as far as possible, because matters involving values, goals, willingness, motivations, interests and emotions concern human rationality and human subjectivity (Subjektivität).

5.2.4 The type of algorithm and the data involved as further considerations

In addition to the type of rights affected, the degree of impact, and the level of risk, the type of algorithm and the data it involves can also be factors in deciding whether the law may permit the application of algorithms in public decision-making.

(1) Algorithm type

At present, algorithms are widely used in public scenarios such as administrative approval, traffic law enforcement, credit assessment, tax inspection, and risk prevention and control, and their uses can be roughly classified into algorithmic approval, algorithmic assistance, and algorithmic prediction. Algorithmic approval, represented by "approval in seconds" in the field of market supervision, resorts to "artificial intelligence plus robots" to establish an electronic, intelligent commercial registration mode covering the entire process of declaration, signature, review, licensing, publicity and archiving, with the machine rather than a human granting approval within seconds to applications that meet standardized requirements. Whether this method can be extended to all administrative approval matters, however, still needs to be weighed according to the complexity and certainty of the matter: if an approval matter has clear procedural steps and a predictable conclusion, it is obviously appropriate to apply the more accurate and efficient algorithmic decision-making; if steps in the approval process involve a high degree of complexity and uncertainty, they should not be left to the algorithm.
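As an illustration of why only standardized, deterministic approval steps lend themselves to this mode, the following is a minimal sketch (the criteria and field names are hypothetical, not drawn from any actual registration system) of a rule-based check that approves an application in seconds when every formal requirement is met and otherwise routes it to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class RegistrationApplication:
    """Hypothetical commercial registration application."""
    name_available: bool          # proposed company name passes the uniqueness check
    documents_complete: bool      # all required documents have been uploaded
    applicant_id_verified: bool   # applicant identity verified electronically
    scope_on_negative_list: bool  # proposed business scope requires special licensing

def decide(app: RegistrationApplication) -> str:
    """Approve automatically only when every criterion is deterministic and satisfied;
    anything complex or uncertain is routed to a human reviewer."""
    if app.scope_on_negative_list:
        return "refer to human reviewer"   # uncertainty: requires discretion
    if app.name_available and app.documents_complete and app.applicant_id_verified:
        return "approved"                  # all formal requirements met
    return "refer to human reviewer"       # incomplete application: not suitable for automation

print(decide(RegistrationApplication(True, True, True, False)))  # -> approved
```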

Beyond the type of algorithm, the role algorithms play in public decision-making, that is, whether they serve merely as auxiliary tools or entirely replace public authority as the real decision-maker, is also an important indicator for determining the boundaries of their application. Complete algorithmic decision-making is an operating mode in which the system automatically collects data, analyzes it, and makes decisions without human intervention. Such decision-making not only directly produces legally effective decisions aimed at individuals, but also has the characteristics of "instant execution and self-realization". Since algorithms that make and execute decisions have a greater impact on individual rights than those that merely assist decision-making or serve the pure implementation of decisions, they should be subject to stricter constraints, and the required normative strength of the authorizing law is correspondingly greater.

Algorithmic prediction occurs when an algorithm predicts an individual's future behavior from past data and, on the basis of the prediction, grants or deprives the individual of behavioral choices. The decision-making process of algorithmic prediction relies on correlations between the data and the inferred results. Yet this artificially constructed model of correlation is only one method of cognition, not the only one. The process inevitably ignores many other social, cultural and contingent factors, and commits the error of inferring what ought to be from what is and judging the future by the past. The use of risk prediction algorithms in parole and sentencing is not only at odds with the spirit of the presumption of innocence, but also negates the procedural safeguards embedded in that principle. Accordingly, if a predictive algorithm is applied in public decision-making to produce a decision with direct legal effect, it should be prohibited in principle. If, however, the algorithmic prediction serves only as risk identification and does not produce a decision with direct legal effect for the individual, the boundary of application may be moderately relaxed. For example, public security organs in many parts of China have begun to apply crime prediction systems. This kind of algorithmic decision-making shifts the handling of criminal cases from after-the-fact crackdown to advance prevention, and public security organs allocate more police force to areas with higher predicted crime risk. Although such decision-making also allocates public resources, it does not produce a direct, legally effective decision, and its impact on individual rights remains indirect. It has nevertheless been pointed out that algorithmic crime prediction causes citizens in a given area to be watched more vigilantly and investigated more intensively, that the already limited police force will be unevenly distributed, and that the police will consciously or unconsciously lower the standard of reasonable suspicion, thereby creating psychological and factual inequality in individual cases. How to balance the public benefit of crime prevention through algorithms against the danger of abuse of government power is therefore a difficult problem that urgently needs to be solved.

(2) The type of data involved

From the perspective of the type of data involved, sensitive personal information is specially protected by law because it bears directly on personal dignity. Paragraph 2 of Article 28 of the PIPL stipulates that "personal information processors may process sensitive personal information only where there is a specific purpose and sufficient necessity, and strict protective measures are taken." Laws and administrative regulations may also make special provisions on whether the individual's written consent must be obtained for the processing of sensitive personal information, and whether relevant administrative licenses or other restrictions apply. All this shows that if algorithmic decision-making is based on sensitive personal information, it must have an authorizing basis in laws or administrative regulations. Re-reading Article 26 of the PIPL on this basis: although it does not directly stipulate that installing image collection and personal identification equipment in public places requires a basis of legal authorization, and only requires that such installation "shall be necessary for maintaining public security, comply with relevant state regulations, and be accompanied by prominent reminder signs", when read together with the other provisions of the PIPL, a public authority that uses facial recognition devices in public places, even for the purpose of maintaining public security, should nevertheless have an authorizing basis in laws or administrative regulations, because sensitive personal information is involved.


6. Algorithmic impact assessment as a procedural safeguard for delineating entity boundaries


The legal reservation can serve as a boundary for the application of algorithms to public decision-making first because it supplies a rights-based boundary for algorithmic decisions: if a fully automated decision touches the fundamental rights of individuals, whether it is permissible should be decided by the legislature rather than by the administrative agency on its own. But the principle has also taken on a democratic dimension: even where a matter cannot be understood as infringing individual rights, it should be subject to the legal reservation if the public good is at stake. Matters of public interest should be decided by the legislature, which alone, under the constitutional order of distribution, holds the "prerogative to mediate conflicts". In essence, reserving important matters to the legislature is meant to give play to the democratic function of legislation, that is, to realize the people's control over the executive through legislative control of the executive. If we thus extend the "legal control" within the legal reservation to "control by the people", then subjecting a proposed application to an algorithmic impact assessment, before the law stipulates whether a public authority may use the algorithm for a given decision, is likewise a preventive means of delineating the decision-making boundary, and a procedural safeguard for the legal reservation.

The algorithm impact assessment system originated in the U.S. Algorithmic Accountability Act of 2018, which aims to establish a standardized evaluation system for the prior review of algorithms about to be put into use, so as to objectively assess the consequences of their application. The Canadian government followed in 2019 with the Directive on Automated Decision-making, which attempts to systematically build an algorithmic impact assessment system guided by core principles such as transparency, accountability, legality, and procedural fairness. Both instruments apply algorithmic impact assessment to public decision-making in the first instance, with the goal of resolving the governance dilemma posed by algorithms in public decision-making. Although the GDPR contains no assessment specific to algorithms, the matters for which its Article 35(3) makes a data protection impact assessment mandatory also cover the application of algorithms in public decision-making: first, a systematic and extensive evaluation of a natural person's personal aspects based on automated processing, including profiling, on which decisions are based that produce legal effects or similarly significant effects on the natural person; second, the large-scale processing of special categories of data or of data relating to criminal convictions and offences; and third, the large-scale systematic monitoring of publicly accessible areas. In its published guidelines on the Data Protection Impact Assessment (DPIA), the EU's Article 29 Working Party states that the purpose of the assessment is to describe the data processing, to assess its necessity and proportionality, and to manage the risks that may arise from it.

As a means of prior prevention, algorithmic assessment has two aspects: technical risk analysis on the one hand, and public participation and external audit on the other. On the one hand, given the high complexity of algorithms, reliance on traditional boundary-drawing and governance models alone is no longer feasible; the whole process of algorithm design, deployment, and operation must be dynamically evaluated at the technical level so as to identify, track, and correct built-in or potential biases in advance and to increase the robustness and controllability of the systems on which algorithms run. On the other hand, algorithmic assessment requires not only a technical evaluation of algorithm design but also procedural safeguards and channels of participation for stakeholders, through mechanisms such as information disclosure and public participation in the assessment process, so as to supplement and strengthen the democracy and appropriateness of algorithmic decision-making. Through prior impact assessment, algorithmic decision-making no longer takes place solely within a closed technical system, but becomes a matter in which relevant stakeholders can broadly participate and exert substantial influence.

China's Personal Information Protection Law, modeled on the GDPR, provides for a similar personal information protection impact assessment in Article 55, and the matters requiring assessment include "using personal information to make automated decisions". Pursuant to Article 56, such an assessment covers: "whether the purpose and method of processing personal information are lawful, legitimate and necessary; the impact on personal rights and interests and the security risks; and whether the protective measures taken are lawful, effective and commensurate with the degree of risk".
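Read purely as a checklist, the contents required by Articles 55 and 56 could be organized roughly as follows. This is only an illustrative sketch; the field names and the naive pass condition are assumptions, not an official template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AutomatedDecisionImpactAssessment:
    """Illustrative record of the assessment contents suggested by
    Articles 55-56 of the PIPL for automated decision-making (sketch only)."""
    purpose_and_method_lawful: bool            # purpose and method lawful, legitimate, necessary?
    impact_on_individual_rights: str           # description of the impact on rights and interests
    security_risks: List[str] = field(default_factory=list)
    protective_measures: List[str] = field(default_factory=list)
    measures_commensurate_with_risk: bool = False

    def clears_assessment(self) -> bool:
        """Naive pass condition: lawful purpose plus safeguards commensurate with risk."""
        return self.purpose_and_method_lawful and self.measures_commensurate_with_risk

# Example: an assessment recorded before deploying an automated approval system.
record = AutomatedDecisionImpactAssessment(
    purpose_and_method_lawful=True,
    impact_on_individual_rights="affects eligibility for a business license",
    security_risks=["re-identification of applicants"],
    protective_measures=["access control", "audit logging"],
    measures_commensurate_with_risk=True,
)
print(record.clears_assessment())  # -> True
```

A checklist of this generality says nothing about discrimination, opacity, or public participation, which is precisely the shortcoming the following paragraph identifies.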

However, compared with the U.S. Algorithmic Accountability Act and the EU's GDPR, the PIPL's broad-brush provision for assessing automated decision-making shows obvious shortcomings. First, Article 55 of the PIPL only obliges personal information processors to conduct a personal information protection impact assessment in advance and to record the processing. Although such records implement the accountability principle for personal information processors and help to demonstrate whether their processing activities comply with the requirements of laws and administrative regulations, the norms do not make clear whether this impact assessment can stop a public authority from applying an automated decision at the source; without adequate enforcement safeguards, the assessment may fail to perform its preventive function. Second, compared with the relatively clear technical framework and index system for algorithm assessment in the U.S. Algorithmic Accountability Act and Canada's Directive on Automated Decision-making, China's personal information impact assessment is not concrete: the assessment items listed in Article 56 of the PIPL do not go beyond those of a general data protection impact assessment, lack specificity to algorithmic decision-making, and may well fail to address the discrimination, bias and opacity that arise in the operation of automated decision-making systems. Third, as noted above, public participation and information disclosure are more necessary when algorithms are applied in public scenarios than when applied by private institutions, so as to maximize the democratic function of the assessment system, yet the PIPL does not require them either. Practice likewise shows that although China's public institutions widely use algorithmic decision-making in intelligent security, financial risk control, urban construction supervision, public health prevention and control, police forecasting and judicial adjudication, they rarely provide channels for public participation in formulation, assessment, objection and relief, and pay little attention to disclosing assessment results beyond self-assessment, so that algorithmic assessment stops at security assessment and risk prevention and does not embed the protection of public participation and procedural rights. Finally, algorithmic assessment is not only a preventive tool but also a component of accountability. To achieve this objective, external accountability and audit forces should be added to the self-assessment conducted by algorithm designers, deployers, and operators. The U.S. Algorithmic Accountability Act therefore adopts a dual track of self-assessment and government evaluation: in addition to self-assessment, the FTC organizes an independent third-party assessment by independent auditors and technical experts for the "high-risk automated decision-making systems" covered by the Act.
In Canada's Directive on Automated Decision-making, such independent third-party assessment is further complemented by effective enforcement safeguards: where the assessment obligation is not effectively performed, the Treasury Board may take any measures it deems appropriate and acceptable. This requirement, too, is missing from the PIPL.

Accordingly, if algorithmic impact assessment is to resolve the governance difficulties that the traditional rule of law faces when public institutions apply algorithmic decision-making, the national cyberspace administration should in the future take the lead in organizing an integrated assessment mechanism and promulgating more detailed algorithmic impact assessment standards, building a categorized assessment framework that fully considers the different types of algorithmic risk and refers to factors such as the specific application scenario, the decision-making risk, the department applying the algorithm, and the consequences for data subjects. Beyond technology and security, the assessment framework for algorithms applied by public institutions should also consider incorporating guarantees of public participation and accountability mechanisms, so that algorithmic assessment can take effect in places that traditional means such as the legal reservation cannot reach.


7. Conclusion


The core of the rule of law has always been how to prevent the expansion and abuse of state power so that individuals are not reduced to mere tools and objects. When state power is combined with algorithmic technology, we must therefore guard against the emergence of unfettered hegemony and ensure that the fundamental purpose of technology remains the improvement of human well-being; it must not be allowed to degenerate into a tool by which public authorities dominate and suppress individuals. Harari warned in Homo Deus: A Brief History of Tomorrow: "Once power is handed over from humans to algorithms, the issue of humanism may be eliminated. As long as we abandon the human-centric worldview and embrace a data-centric worldview, human health and well-being will no longer seem so important... Humans may be demoted from designers to chips, then to data, and finally dissolved and dispersed in the torrent of data like a clump of earth in a rushing current." In the era of artificial intelligence, technology has for the first time transcended the status of an object dominated by humans, and human subjectivity is being challenged as never before. The dissolution of human subjectivity and the decline of the concept of the human are intensified by the algorithmization of decision-making driven by commercial interests and the needs of government supervision. All of this demands a positive response from modern law. Yet legal regulation of algorithms cannot remain at the level of tools; it must be grounded in human subjectivity. Whether by giving individuals systematic data rights, by imposing on data processors obligations of disclosure, explanation and assessment of algorithms, or by exploring the substantive boundaries of the application of algorithms by public authorities, the ultimate goal is to ensure that human subjectivity and autonomy are not encroached upon by emerging technologies, and that the rule-of-law purpose of restraining public power is not frustrated by the advent of the age of artificial intelligence.