Algorithm Evasion and Its Regulation from the Perspective of the Sociology of Law
Author: Qiu Yaokun
Lecturer, Law School, Capital University of Economics and Business
Abstract: Powerful as algorithmic power is, it can still be evaded, and a continuous dynamic game plays out between it and those it governs. The basic strategies of algorithm evasion are avoiding becoming an object of governance, adjusting one's behavior to meet governance requirements, and confusing the subject of governance. The cause of the phenomenon lies in the limited rigidity of technology, which is ill-suited to the balancing of interests that marginal governance problems require: it cannot convert every "standard" into a "rule," nor can it resolve the fundamental social contradictions underlying a pluralistic normative order. Most people, however, are not algorithm evaders, and algorithmic power is effective on the whole; a modest amount of algorithm evasion is even beneficial, for it limits algorithmic power, balances governance against freedom, and promotes social stability and the overall efficiency of governance. Regulation should therefore adhere to the principle of inclusive prudence: strengthening the technological regulation of simple algorithm evasion, remaining tolerant of complex evasion, and giving full play to the regulatory role of social norms.
1 Posing the question
Driven by an endless stream of practical problems such as big-data-enabled price discrimination against regular customers, social protection for workers in new forms of employment, and robot judges, current legal research on algorithms focuses mainly on limiting power. Some scholars demand that the operation of algorithms be subject to supervision and control, invoking algorithmic transparency and algorithmic explanation against the algorithmic black box and algorithmic autocracy; others demand that the outcomes of algorithmic operation be lawful and fair, eliminating algorithmic failure, discrimination, and harm; still others, planning ahead, consider the legal status of (strong) artificial intelligence and argue over whether to regulate it as a legal person rather than a mere object, from which follow questions such as copyright protection for AI-generated works and the allocation of liability for autonomous vehicles. These studies have ultimately condensed into provisions such as Article 24 of the Personal Information Protection Law, Article 2 of the Guiding Opinions on Implementing the Responsibilities of Online Catering Platforms to Effectively Safeguard the Rights and Interests of Delivery Personnel, the Provisions on the Administration of Algorithmic Recommendation in Internet Information Services, and the Guiding Opinions on Strengthening the Comprehensive Governance of Internet Information Service Algorithms.
The implicit assumption of this power-limiting research is that algorithmic power may be too strong to resist and may harm individual rights and innovation. Empirical observation shows, however, that under certain conditions algorithmic power can also be rather weak: the governed may discover its patterns and evade it, so that algorithmic power fails to achieve its stated goals. Such evasion sometimes relies on equally sophisticated counter-technology, but in most cases it has little technical content at all; it is merely a new variant, in the algorithmic age, of "policies from above, countermeasures from below." Against it, the word "intelligence" in artificial-intelligence governance rings with irony.
The study of algorithmic power therefore cannot attend only to the possibility of its abuse; it must also attend to the limits of its efficacy. Nor can those governed by algorithmic power be portrayed simply as victims waiting to be rescued; their own agency must be taken seriously, and the game between the two sides of the power relationship analyzed dynamically. This is, in effect, the sociological perspective that focuses on the gap between "law in books" and "law in action." Against a background of plural norms, it examines the interaction of the parties with the special norm of law, and it does not rush to propose reforms, knowing that upsetting the existing equilibrium can have unforeseeable consequences and that behavior commonly believed harmful often performs unexpected social functions. This perspective helps us understand comprehensively the underlying causes and practical effects of the legal phenomena at issue, so that proposed solutions become more cautious and effective and we avoid an improper or excessive exercise of public power that merely adds regulatory risk and pushes the relationship between power and rights into a worse equilibrium, in which the benefits of the new order have yet to appear while the harms of disrupting the old order have already arrived. This article therefore proceeds from the sociology of law to analyze the main strategies, internal causes, actual consequences, and functions of algorithm evasion, to reveal the operational boundaries of algorithmic power, and to propose, with due caution, a regulatory scheme for algorithm evasion, as a way of reflecting on algorithmic regulation and, more generally, on the legal regulation of technology.
A preliminary note on terminology: the subjects of algorithmic power include both public and private actors, and the norms that algorithms enforce include both law and non-legal social norms, so a distinction can be drawn between "law governing algorithms" and "algorithms enforcing law." For the topic of this article, however, the distinction matters little: whatever subject an evaded algorithm supports, and whatever the nature of its norms, the types, causes, and consequences of evasion are similar. Moreover, many social norms are subcontractors of legal governance and share its goals; and internet platforms, the most important subjects applying algorithms, have taken on features of public governance that blur the public-private boundary. The discussion below therefore treats algorithms applied by public and private subjects together, distinguishing them only where necessary. Accordingly, the "governance" of algorithmic power discussed here refers mainly to governance through algorithms, that is, algorithmic power as a means of governance, which differs from the governance that restrains algorithmic power. Where this article discusses the restraint of algorithmic power and of algorithm evasion, it mainly uses the word "regulation." The terminology is explained here to avoid confusion.
2 The main strategies of algorithm evasion
Algorithm evasion refers to behavior by which those who ought to be governed by algorithmic power take specific actions and ultimately escape its governance. For example, exploiting the principles of search engines' ranking algorithms, websites embed formally compliant pages containing hidden content, so that they are retrieved and ranked highly even when irrelevant to the search query. Users who have worked out the rules of social networks' content-review algorithms publish politically sensitive, plagiarized, pirated, or vulgar and pornographic content that should have been blocked, by means of homophones, altered phrasing, and padding with blank content. The "internet water army" of paid posters, drawing on its experience of playing against rating platforms' anti-manipulation algorithms, keeps piling up fake positive or negative reviews through account cultivation, manual rating, and similar methods. The core of such behavior is to violate the algorithm's requirements while escaping the consequences that algorithmic power attaches to violation. Empirically, there are three basic strategies for achieving this.
2.1 Avoiding becoming an object of governance
The primary means of algorithm evasion is to avoid becoming an object of algorithmic power at all, keeping oneself outside the algorithm's scope of application, just as offenders in traditional offline settings evade public power by avoiding surveillance cameras, police patrols, and possible enforcement. The "unregulability" debate at the dawn of the internet is an extreme and typical instance of this strategy. The early internet architecture guaranteed full anonymity: online actors' identity, location, and behavior were unknown, so they could not be regulated at all, let alone by algorithmic means. But as network architecture moved from openness to closure, the story that "on the Internet, nobody knows you're a dog" became history. Cyberspace has become thoroughly regulable and has extended deep into offline life, blurring the traditional online/offline distinction. Today it is extremely difficult to avoid becoming an object of algorithmic power altogether, unless one chooses the few behaviors, or even a whole lifestyle, that do not touch the internet at all (such as paying in cash or writing paper letters).
Yet it is not entirely impossible. Platform selection is one of the newer effective means of avoiding algorithmic governance. Because platforms are at different stages of development, the "lawless growth" of emerging platforms remains possible even in a generally closed cyberspace, especially where disruptive innovation and cross-sector competition occur. Even where different platforms are equally law-abiding, the data and information barriers between them can still block cross-platform governance by algorithmic power and thereby shelter cross-platform violations. The boundaries between platforms thus resemble the old online/offline boundary: they partition the unified network space anew and defeat the application of any unified algorithm. Algorithm evaders can exploit emerging platforms whose rule systems are not yet sound to keep publishing vulgar or plagiarized content and selling counterfeit and shoddy goods, or exploit the information asymmetry and data barriers between platforms to engage in cross-platform plagiarism and "copycat" operations, much like the internet-abandoning behaviors and lifestyles mentioned above.
Similarly, differences in national legal environments lay the groundwork for evaders' choice of jurisdiction. What one country treats as vulgar pornography may in another symbolize sexual freedom and liberation; what one country condemns as plagiarism or "knock-off" products may in another count as fair use and reasonable borrowing. Each side regards the other's rules as unreasonable, which creates room for actors to maneuver between them. Indeed, evasion of law in private international law consists precisely in exploiting differences between the laws of different jurisdictions to carry out acts that the law of one place does not recognize; a typical example is same-sex couples choosing to marry where same-sex marriage is legal. As countries press forward with the construction of network sovereignty, regional or national segments of the internet are taking shape. The integration of online and offline governance therefore runs both ways: online governance methods and logic penetrate the offline world, and offline territorial logic penetrates the online world in turn. Would-be algorithm evaders have thereby gained another crack in algorithmic power.
In sum, the essence of avoiding becoming an object of governance is to seek a place "beyond the law" of algorithmic power, whether on another platform or in another country, so as to escape the algorithmic power of the present platform and the present country. As domestic and international legal systems grow more unified, however, this means of evasion becomes ever harder to sustain, like the strategy of cutting oneself off from the internet altogether. To continue evading algorithms while remaining inside a governance system backed by algorithmic power, one must either adjust one's behavior or confuse the governing subject.
2.2 Adjusting behavior to meet governance requirements
The second means of algorithm evasion is to adjust one's behavior to meet governance requirements, so that one remains within the algorithm's scope of application yet still covertly carries out the conduct the algorithm is meant to prohibit. Offline analogues include false tax declarations and other false disclosures in traditional economic law, or disguised lending and disguised financing, in which a nominal contractual relationship inconsistent with the transaction's substance conceals an illegal purpose beneath a lawful form. The rise of the internet has not only supplied auxiliary tools for such behavioral adjustment, such as automated article-laundering ("manuscript-washing") software and text- and media-conversion software, but has also made these technologies and the evasive behavior they enable widely available. Under the law of large numbers, user production is diverse and updated at tremendous speed, greatly increasing the difficulty and the lag of algorithmic governance. As the saying goes, "where virtue rises one foot, vice rises ten." More specifically, adjusting behavior to meet governance requirements divides into two sub-types.
The first is to change the algorithm's input data directly. Users must register accounts for online activity and supply basic personal information, such as gender, age, and location, which then grounds the algorithm's governance; if such data are falsely reported, algorithms that bar minors from particular content or that target recommendations at particular genders or regions can be evaded. Admittedly, the online real-name system and its penalty rules, together with big-data verification of declared information against ID numbers, real-name phone numbers, and email addresses, have reduced false declaration to a minimum. But online spaces not covered by these systems and technologies remain, such as websites that merely ask visitors to confirm their age before browsing, or that offer an opt-out from information collection. Moreover, laws and social norms protecting personal information themselves require limits on administrators' collection and analysis of such data. False declaration therefore remains possible, and algorithms can still be evaded by directly altering their input data.
The second is to change the algorithm's input data indirectly, by changing one's behavior. This is the most common form of algorithm evasion: evading sensitive-word screening through homophones, pinyin, initials, split characters, and allusive substitutes; evading algorithmic copyright review by rewriting text through word substitution, reordering, and paraphrase, by altering audio and video through flipping, rotation, cropping, added noise, speed change, and frame-rate conversion, or by splicing together materials of different types or works from different sources; or evading algorithmic review of fake ratings by cultivating accounts through random scoring of other works, especially niche works, and controlling comment sections by manual rating in overwhelming numbers. These complex evasions all belong to the broad category of "concealing an illegal purpose beneath a lawful form." They are premised on the algorithm's current technical level, arising precisely because the algorithm already blocks the simple forms of violation; the two sides keep playing against each other, each the cause of the other. Raising the technical level will therefore only drive algorithm evasion to upgrade in turn, and may become unsustainable once it touches the boundaries of personal freedom; its power to restrain evasion is limited.
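The vulnerability of rigid screening to such indirect adjustment can be illustrated with a deliberately simplified sketch. The blocklist, the matching logic, and the example words below are hypothetical illustrations, not any platform's actual algorithm: an exact-match filter catches the literal sensitive word but misses the split, padded, or respelled variants that a human reader decodes instantly.

```python
# A deliberately naive sensitive-word filter: exact substring matching
# against a fixed blocklist (hypothetical example word).
BLOCKLIST = ["contraband"]

def blocked(text: str) -> bool:
    """Return True if the text contains a blocklisted word verbatim."""
    return any(word in text for word in BLOCKLIST)

# The literal word is caught...
print(blocked("buy contraband here"))        # True
# ...but trivial variants pass: character insertion, word splitting,
# and homophone-style respelling all defeat exact matching.
print(blocked("buy c-o-n-t-r-a-b-a-n-d"))    # False
print(blocked("buy contra band here"))       # False
print(blocked("buy kontraband here"))        # False
```

Each countermeasure the filter might add (stripping punctuation, fuzzy matching, phonetic normalization) invites the next variant in turn, reproducing in miniature the escalating game described above.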
The essence of adjusting behavior to meet governance requirements, then, is to "conceal an illegal purpose beneath a lawful form," either by changing input data directly or by changing behavior so as to change the data indirectly. But not all data and behavior are easy to change, and changing them may partly or wholly defeat the evader's original purpose. Hence a further means is needed that evades algorithmic power without altering the core data and behavior: confusing the subject of governance.
2.3 Confusing the subject of governance
The final means of algorithm evasion is to confuse the governing subject, so that one directly engages in conduct the algorithm prohibits and yet slips through, falling outside the subject's field of attention. In communication studies this strategy is called information flooding: piling up masses of irrelevant, low-quality, or false information until the targeted audience loses its attention to the key issues. The internet is the perfect instrument for such a strategy. Compared with earlier media such as the telegraph, telephone, radio, and television, the internet disseminates information in greater volume, in more forms, faster, and more widely; its mode of dissemination is bidirectional and multidirectional rather than one-way and singular. The mixture of one-to-one, one-to-many, and many-to-many communication greatly lowers the cost of dissemination, intensifies the flow and sharing of information among different subjects, and produces the so-called information explosion. Algorithmic power is advantageous precisely in handling the governance burden of massive content, yet in particular cases it can still be deceived by the flood of information.
For example, one who intends to disseminate obscene and pornographic content would defeat his own purpose by pixelating it, cutting it, or reworking it into "soft porn" as law or platform rules require; his room for adjustment is limited. But he can still combine a small amount of pornographic content with a large amount of harmless or blank content for publication, or apply shading and tone adjustments, to evade algorithmic censorship. Likewise, one who intends to spread pirated or plagiarized content can, beyond the adjustment strategies above, mix it with unrelated content or with his own original content to lower the repetition rate and evade algorithmic copyright review. Although such behavior involves no complex behavioral adjustment and would look simple, even crude, to a human reviewer, the rigidity of algorithmic power and the weakness of manual review in the face of massive content leave it wide room for application; here too the game keeps escalating, exposing the least "intelligent" side of governance by "artificial intelligence."
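A toy calculation shows why dilution works against a naive repetition-rate check. The similarity measure and the 0.5 flagging threshold below are hypothetical illustrations, far cruder than real duplicate-detection systems, but they capture the arithmetic of the strategy: the copied material is all still present, yet padding drags the overall overlap score below the trigger.

```python
def repetition_rate(text: str, source: str) -> float:
    """Share of the text's words that also appear in the source: a crude
    stand-in for the overlap scores used in duplicate detection."""
    words, src = text.split(), set(source.split())
    return sum(w in src for w in words) / len(words)

SOURCE = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over the lazy dog"
# A verbatim copy scores 100% overlap and is flagged by any threshold.
print(repetition_rate(copied, SOURCE))  # 1.0

# Dilute the same verbatim copy with unrelated padding words: the copied
# sentence survives intact, but the rate falls below a 0.5 threshold.
padded = copied + " " + " ".join(f"filler{i}" for i in range(12))
print(repetition_rate(padded, SOURCE) < 0.5)  # True
```

The evader's cost is near zero, while the governor must redesign the metric (e.g., flagging the longest matching span rather than the overall ratio), illustrating the asymmetry the paragraph above describes.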
In sum, the essence of confusing the subject of governance is an information flood: combining illegal information with legal information to escape algorithmic censorship. It is relatively simple and crude, and it exposes the inelasticity of algorithmic power. Having surveyed the main strategies of algorithm evasion, we face the further question: why are evaders able to avoid becoming objects of governance, to adjust their behavior to meet governance requirements, and to confuse the governing subject? The next section analyzes the causes from three angles: technology, norms, and society.
3 The internal causes of algorithm evasion
The internal causes of algorithm evasion can usually be approached from both subjective and objective angles: subjectively, evaders have a weak consciousness of compliance and do not actively obey the algorithm's requirements; objectively, the algorithm's governance has loopholes that evaders exploit. But such clichés are superficial: they reveal neither what the so-called "algorithmic loopholes" are nor why "compliance consciousness" is weak. More importantly, if we must assume that the governed are "bad men," then we should not fixate on transforming their subjective consciousness; we should instead invest in objective institutional construction and technological improvement, changing their behavioral incentives so that compliance becomes compatible with their interests. This section therefore analyzes the causes of algorithm evasion from three aspects: technology, norms, and society.
3.1 Relativity and absoluteness of technological rigidity
At the technical level, the surface cause of algorithm evasion is that, just as the rigidity of code and architecture is relative rather than absolute, the stringency of algorithmic power is stringent only relative to average human intelligence; a cleverer few can therefore adjust their behavior to meet governance requirements or confuse the governing subject. A firewall may stop most people from roaming the web, but a few can still cross it with technical means such as virtual private networks; speed bumps may stop most drivers from speeding, but a few with exceptional driving skill can ignore them. So with algorithmic power: content-censorship, copyright-enforcement, and crime-prediction algorithms are effective for most people and in most cases, but that does not mean they are effective everywhere and always. For the "bad men" with ulterior motives and superior skill, they merely raise the cost of evasion or violation, forcing them to expend more effort and adopt other methods, such as more elaborate content production (homophones, article laundering, collage, and the like), deliberate planning of movement trajectories, or the crafting of a personal image.
Admittedly, no technology and no norm it enforces can be implemented perfectly; blocking the violations of the great majority is enough to achieve the goals of governance, and it helps avoid a backlash from the governed. But the impossibility of perfect enforcement reveals something deeper: the rigidity of algorithmic governance, and of technical governance generally, cannot perform the balancing of interests that marginal governance problems demand. Recombining an original expression can stay outside what copyright law directly prohibits while still amounting, in substance, to article laundering that steals the original's ideas: is that fair use or infringement? Are the reviews a user posts to cultivate an account a legitimate exercise of free speech, or improper conduct that violates platform rules and deliberately circumvents the algorithm? These questions are hard even for human governors, because they touch the boundary between governance power and individual freedom, where conduct cannot be sorted neatly into black and white. Technological rigidity insists on drawing precisely such boundaries, and the mismatch is great. This is exactly why intelligent algorithms are so easily circumvented by methods of little intelligence: a rigidly drawn boundary leaves a gray zone in which to operate, whereas the existence of discretion deters.
The mismatch between technological rigidity and the balancing of interests also blocks the hope of overcoming this limitation through technical progress. Using machine learning to typify interest balancing and to clarify the rules latent within it is a beneficial direction for improving algorithmic power and curbing its abuse. But it cannot eliminate marginal governance problems or their demand for interest balancing: solving old problems, clarifying existing norms, and blocking some people's evasion and violation will also create new margins requiring new balancing, and there will always be people seeking to evade the new norms. Article laundering and account cultivation emerged precisely because algorithmic power governs the more basic forms of evasion and violation effectively, and an effective response to them will not be the endpoint of algorithmic power's development either. This is the dynamic process revealed by the game between algorithmic governance and its evasion: more perfect enforcement breeds more perfect evasion.
3.2 The Gap between Rules and Standards
Viewed through legal theory's distinction between "rules" and "standards," the essence of the mismatch above is the impossibility of completely converting standards into rules by algorithm. A rule specifies in advance one or more clearly defined operative facts and the normative consequences of their realization; a standard offers in advance only a set of factors, related to the legislative purpose, that must be weighed together, and the determination of its normative consequences is more complex. Rules, being clear and fixed in advance, fully embody the inner meaning of the rule of law; but precisely this clarity serves not only those who would obey the law but also those who would evade and break it. Bound by their literal terms, rules are powerless against conduct that satisfies the form while defeating the substance, unless they are continually converted back, through "interpretation," into de facto standards. Standards, vague ex ante and certain only ex post, cannot give precise behavioral guidance, but through discretion they can balance interests in the concrete context, suppress the evasive behavior just described, and avoid rules' over- and under-inclusiveness, making them a necessary supplement to governance by rules and to the rule of law.
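The rule/standard contrast can be caricatured in code. The factor names, weights, and thresholds below are invented purely for illustration; the point is structural: a rule is a crisp predicate that can be satisfied (and gamed) in advance, while a standard weighs purpose-related factors together and resists point-by-point evasion.

```python
# A "rule": one crisply defined operative fact, decided mechanically.
def rule_speeding(speed_kmh: float) -> bool:
    return speed_kmh > 60  # bright-line limit: trivially checkable, trivially gamed

# A "standard": several purpose-related factors weighed together.
# Factor names and weights are invented for illustration only.
def standard_reckless(speed_kmh: float, visibility: float,
                      pedestrians_nearby: int) -> bool:
    score = (speed_kmh / 60) * 0.5 \
          + (1 - visibility) * 0.3 \
          + min(pedestrians_nearby / 10, 1) * 0.2
    return score > 0.6

# Driving at 59 km/h evades the rule by construction...
print(rule_speeding(59))   # False
# ...but the same conduct can still fail the standard in fog with a crowd present.
print(standard_reckless(59, visibility=0.2, pedestrians_nearby=8))  # True
```

The standard catches what the rule's literal terms let through, but only at the price of the ex-ante uncertainty the paragraph above describes: the driver cannot compute in advance exactly where the line lies.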
What algorithmic power pursues is rules that are ever clearer and more definite, leaving violations nowhere to hide. But this amplifies the advantages and disadvantages of rules exponentially: it imposes more constraints on the enforcing and governing side while giving more guidance to the evading and violating side, transforming the balance of power between the two parties rather than simply strengthening the enforcer. Beyond the reach of rules, discretionary standards therefore retain their necessity and their role. Machine learning does carry a certain color of machine discretion: it follows no explicit rule of algorithmic power but integrates many factors into a decision, which seems hard to evade. Yet quite apart from the opacity of the black box, machine-learned decisions can still be reverse-engineered into patterns and deliberately evaded, unless they are wholly arbitrary, which would destroy machine learning's legitimacy and place it beyond regulation altogether. In fact, every standard-based decision practice tends to summarize its own rules and drift toward rulification, as the induction of judicial experience shows; algorithms merely promote and accelerate the process. This, too, highlights the gap between rules and standards.
Accordingly, algorithmic transparency and algorithmic explanation, the remedies adopted to strengthen algorithmic accountability, have the side effect of further enhancing evaders' gaming capacity. Even today's opaque, unexplained algorithmic power is being reverse-engineered by those intent on evading it; if governance algorithms were made fully transparent and explained to a level everyone can understand, the difficulty of evasion would fall dramatically. The reality of algorithm evasion, and this negative externality of transparency, may suggest that the algorithmic black box is not the most important problem to solve: whether or not the operating process is transparent, the governance consequences of algorithmic power are real, and those consequences can serve as the starting point of governance, or of evasion. This is not to say that algorithmic accountability is unnecessary, still less to advocate that "when the law cannot be known, its power cannot be fathomed." It is only to recognize the limits and side effects of algorithmic transparency and to position it reasonably as an auxiliary tool of accountability. Limited algorithmic transparency should thus assist algorithmic regulation: keeping the operation of power visible while avoiding handing evaders so much information that they can construct, by hand or by machine learning, still more complex and advanced forms of evasion.
As technology develops, personalized rules have begun to appear, and standards and rules have begun to merge. But this trend only eases the tension between efficiency and fairness, and may even sharpen the conflict between fairness in the individual case and fairness across cases. Case-by-case differentiation fixed by technical means is the ultimate refinement of algorithmic power: it can tell each person ever more precisely how to behave in each scenario, but it makes it harder to compare, horizontally or vertically, the reasonableness and consistency of different treatments in different scenarios, so that governance loses systematicity and unity. The gap between standards and rules therefore persists, transformed into a gap between personalized rules and universal rules. And the rigid distinctions drawn by technical refinement make this gap harder to bridge by any means other than technology itself. This is a challenge not only for the governance of algorithmic problems but for the governance of all problems in the society of the future.
3.3 The deep roots of a pluralistic normative social order
If we restore technology, and the norms it enforces, to the social context from which they arise, we find that the fundamental cause of algorithm evasion lies in the deep-rootedness of a pluralistic normative social order: there are always other norms that obstruct the implementation of algorithmic power and keep it from taking full effect. This is especially true of evasion that avoids becoming an object of governance. Given the reality of social grouping and stratification, a plurality of normative orders is inevitable; the divergence of norms across platforms and countries is only one instance. Moreover, as the internet strengthens both strong and weak ties, multiplies channels of expression as never before, and even deepens the information cocoon, social clustering, stratification, division, and polarization grow ever more severe, and unified normative consensus ever scarcer. Even when relatively rigid algorithmic technology is used to enforce particular norms, including formal law, it still meets the powerful impact of opposing norms, in a clear pattern of normative competition.
Precisely for this reason, algorithmic avoidance does not necessarily lack legitimacy; it is simply the evader's choice among different norms and orders, even where the norms executed by the evaded algorithm are lawful. The phenomenon is thus not necessarily a problem to be solved, or at least the solution is not necessarily to align everything with formal law or the rules on the books; the diversity of norms must also be recognized and normative competition allowed. Algorithm evaders are not necessarily bad people or legally ignorant; they merely have their own interests and value preferences, and may even be well versed in the algorithms and the norms behind them. It should also be acknowledged that the norms an algorithm supports still operate within the act of avoidance, serving as its background, starting point, and constraint; without those norms, avoidance would not take the specific form it does, much like "bargaining in the shadow of the law." Algorithmic avoidance therefore reveals not only the gap between an algorithm's intended purpose and its actual effect, but also the multiple influences that plural norms and orders exert on acting subjects.
Furthermore, from the perspective of plural norms and orders, algorithms are only one factor influencing law and normative order; they do not by themselves alter the independent forces of society, and this is their most critical limitation. Although laws and regulations backed by algorithms are more efficient and more rigid, intensified governance cannot resolve fundamental social problems or completely overwhelm opposing social norms. As long as social contradictions persist, evasive and even unlawful behavior will not disappear; it will only change form, shift location, or turn more extreme and violent. Evasion and innovation never cease, nor are they extinguished by perfect enforcement; they merely change their appearance, at social costs that may be high or low.
Disregarding accuracy, abusing algorithmic power, or even sliding toward algorithmic dictatorship by attempting to classify every gray area revealed by avoidance as illegal or irregular, bringing it all within the reach of algorithmic power and thereby eliminating the balancing of interests that marginal governance issues require, is a highly tempting but highly harmful path of governance. It would not only injure individual rights and freedoms but also backfire on algorithmic power itself: algorithms that excessively violate other social norms lose legitimacy, require far greater resource investment than everyday governance, and see their effectiveness eroded by the persistent resistance of the governed. For example, imposing excessive restrictions on comments in order to combat account cultivation may ultimately drive users away to other platforms; imposing excessive restrictions on the fair use of copyrighted works in order to combat manuscript washing may end not in the exhaustion of public creativity but in the abandonment of the entire copyright protection system. Algorithmic dictatorship is frightening, but widespread anomie is frightening too.
In summary, algorithmic avoidance exists because the rigidity of technology is limited and ill suited to the balancing of interests required by marginal governance issues: technology cannot convert all "standards" into "rules," nor can it resolve the fundamental social contradictions underlying the diverse normative order. Such avoidance cannot be eliminated by technological progress and is in fact likely to intensify under regulatory measures such as algorithmic transparency. Given its long-term existence, we must therefore analyze the consequences of algorithmic avoidance, and how the phenomenon should be viewed, before proposing any technical, normative, or social solution addressed to the causes above.
4 The Consequences and Functions of Algorithm Avoidance
Ordinarily, after a problem has been raised and analyzed, one proceeds to problem-solving and explores effective responses to algorithmic evasion. But the perspective of the sociology of law is not so activist or instrumental; it is, rather, colder. Perhaps not every problem exists as a problem, or ought to have a solution. Because algorithmic avoidance is a game played in direct opposition to algorithmic power, simple problem-solving strategies may achieve little and may even escalate avoidance further.
More importantly, the consequences of avoidance may not be as severe as imagined, and avoidance itself may serve a "function" in the socio-legal sense, possessing a practical rationality of its own, so it is not necessarily a problem at all. What if an algorithm is circumvented? Why break an existing equilibrium when the new one is not yet known? This section therefore seeks a deeper understanding of the consequences, or functions, of algorithmic avoidance, laying the groundwork for the cautious, tolerant coping strategies proposed in the next section.
4.1 The actual negative consequences of algorithm avoidance
Generally speaking, first, algorithmic evasion means that the goals of algorithmic power are not achieved, or not fully achieved, reducing the efficiency of algorithmic enforcement. For example, content governance algorithms are intended to reduce illegal and irregular content and keep cyberspace clean, yet evaders continue to publish and disseminate such content; governance goals fail, and platforms may have to fall back on traditional, less efficient manual censorship. Second, avoidance allows some people to capture more benefits than others, or to escape the adverse effects of algorithms more fully, making algorithmic enforcement unfair and even reinforcing existing structures of social inequality. Those able to switch platforms or countries to evade algorithms, for instance, tend to be better educated and economically stronger; after evading, their information and resources improve relative to those who have not evaded, sharpening social differentiation. Finally, the continuous game between algorithmic power and evaders may systematically waste social resources and reduce aggregate social welfare. The ongoing contest between the "Internet water army" and rating algorithms, for example, may not markedly improve the rationality of those algorithms, yet the enormous investment in anti-brushing countermeasures may leave the overall cost-benefit balance negative.
The above analysis, however, shows only that algorithmic avoidance has the potential to reduce efficiency, cause unfairness, and waste social resources; it says nothing about the actual scope and severity of these negative consequences. Since no law or regulation is ever fully enforced, even when empowered by technologies such as algorithms, and a gap always remains between "law in books" and "law in action," differences in the effectiveness of laws or regulations come down to the proportion of violations and the severity of their consequences. Whether algorithmic power is effective therefore depends on how many people evade algorithms and how many obey them; eliminating avoidance altogether is not a precondition of effectiveness. This can be explained with the theoretical tool of the deterrence curve.
The so-called deterrence curve plots deterrent effect (vertical axis) against the probability of enforcement or the severity of punishment (horizontal axis), yielding a positively correlated relationship. That it is a curve rather than a straight line signals the nonlinearity of the causal relation; that it bows upward and flattens indicates diminishing rather than increasing marginal effect. In other words, raising the probability of enforcement or aggravating punishment yields less and less additional deterrence, approaching zero beyond a certain point. Empirically, this reflects the fact that many people are simply not moved by deterrence; their settled preferences cannot be changed by it. They are either extreme law-abiders (ordinary people who never contemplate killing or stealing) or extreme violators (terrorists and antisocial actors whom no punishment will stop). The reason lies in the support or negation of the law by social norms and by individuals' internal norms. Discussions of deterrence must therefore focus on those who can be deterred, changing their behavior at the margin by adjusting the relevant variables; otherwise enforcement resources are wasted.
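As a stylized numerical illustration (not from the source: the saturating exponential form and all parameter values are assumptions chosen purely for demonstration), a deterrence curve with diminishing marginal effect can be sketched as:

```python
import math

def deterrence(p: float, d_max: float = 100.0, k: float = 5.0) -> float:
    """Stylized deterrence curve: deterrent effect rises with the
    enforcement probability p (between 0 and 1) but saturates toward
    d_max, so each further increase in p buys less extra deterrence."""
    return d_max * (1.0 - math.exp(-k * p))

# Marginal deterrence gained from raising p by 0.1 shrinks as p grows,
# approaching zero near the flat end of the curve.
marginal_gains = [deterrence(p + 0.1) - deterrence(p)
                  for p in (0.0, 0.3, 0.6, 0.9)]
```

On these assumed parameters the successive marginal gains fall from roughly 39 to well under 1, mirroring the point above that enforcement resources spent at the flat end of the curve are largely wasted on those who cannot be deterred.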
Because algorithmic evasion targets laws or regulations enforced through algorithmic power, it too can be analyzed with the deterrence curve. Since different algorithms' avoidance problems correspond to different deterrence curves, this article takes platform content governance algorithms as an example and then generalizes the conclusions. Content governance on today's Internet platforms is relatively mature: the relevant platform rules are fairly sound and, with the assistance of algorithms, have been effectively enforced, successfully addressing many online content problems, maintaining good order in network content, and meeting both legal requirements and user needs, to the point of raising the risk that platform algorithmic power is too strong and liable to abuse. The deterrence curve associated with platform content governance algorithms has accordingly become quite flat.
Empirically, the flatness of this curve is reflected in the fact that many people, indeed most people, publish ordinary and harmless online content and rarely think about how to push expression to its limits; constrained by social norms or their own internal norms, they do not even contemplate publishing content that goes too far. Even those drawn to somewhat vulgar content will, under the deterrence of platform rules and even the law, mostly adjust their behavior to meet governance requirements rather than blatantly commit violations through platform switching or obfuscation. Governance goals can thus still be achieved under relatively loose standards, balancing network governance and personal freedom. Only a few actors with ill intent and superior skill spend their days devising boundary-crossing content. As the governance capacity of algorithms and law continues to improve, those who still rack their brains to evade content governance algorithms sit at the far, flat end of the deterrence curve, among the extremes who cannot be deterred. Concentrating enforcement resources on that end is unlikely to be economically rational under cost-benefit analysis. Exaggerating the negative consequences of algorithmic avoidance is therefore both unrealistic and unnecessary.
Statistics likewise support the relatively small footprint of algorithmic avoidance in platform content governance. Take the transparency reports released by Facebook as an example. Since 2018, content violating Facebook's rules on adult nudity and sexual activity has at its peak accounted for only 0.14% of all content on the platform, roughly 14 items per 10,000, and at its lowest only 0.02%. True, even a small proportion of Facebook's total content translates into a huge absolute quantity, and pornography is only one of many types of violating content. But the platform's enormous user base, and the limits that information cocoons place on what any user actually sees, cannot be ignored, so the probability that a given user encounters violating content is very low. The data thus further support the theoretical analysis and empirical observations above.
Hence, so long as a relatively stable social order persists, the negative consequences of algorithmic avoidance for efficiency, fairness, and the allocation of social resources will not be as severe as imagined. Most people remain governed by algorithmic power, and increasingly so as algorithms are upgraded. The flip side of high-level avoidance is that a mass of low-level avoidance and outright violations has been prevented or punished, a huge gain in the efficiency with which algorithms enforce norms. Moreover, through cognitive shaping such as information cocoons and information floods, most people do not merely choose compliance after a cost-benefit calculation; more likely, they never form the awareness or the psychological option of violating or evading in the first place, presupposing instead the reasonableness of the algorithm's norms, so that enforcement and compliance become one. Reflection on and criticism of algorithmic power therefore does not entail legal or technological nihilism. Overamplifying the avoidance problem is not only unnecessary; by overestimating human agency it is likely to dull vigilance toward algorithmic power and obstruct recognition that preventing the abuse of that power is the more urgent issue of algorithmic governance.
4.2 Implicit Positive Function of Algorithm Avoidance
More importantly, algorithmic avoidance also serves to limit algorithmic power and counter its abuse. Existing regulatory schemes all have limitations: transparency of the algorithmic process may be infeasible given the complexity of the technology and the comprehension barriers facing users, or undesirable given conflicts with private and public interests; and guarantees of fair, lawful algorithmic outcomes and clean, compliant data are not enough to dispel every doubt about AI analysis and decision-making. An adequate improvement plan, beyond organically combining these approaches, may therefore also draw, from the perspective of plural normative orders, on avoidance by the governed as a third-party check on algorithms. For example, users who find a content governance algorithm too strict, or who feel subjected to opaque or unfair "big data killing" (algorithmic price discrimination), can not only appeal to higher-level power holders but also resist actively through the avoidance strategies described above, publishing content the algorithm disallows, obstructing the algorithm's purpose, and remedying their rights through self-help. Compared with the uncertain outcomes and slow pace of appeals, direct avoidance is plainly more efficient, and it can alert the holders of algorithmic power, prompting continuous improvement.
More generally, the power-limiting effect of algorithmic avoidance shows that gaps remain in the implementation of algorithmic power and that individual freedom has not been excessively constricted, which softens pushback from the governed and helps maintain social stability. This is the deeper, implicit function of avoidance. Note that even norm enforcement that is efficient, compliant, transparent in process, and fair in outcome still obstructs individual freedom and can provoke individual resistance. Because algorithmic power greatly raises the efficiency of enforcement, it inevitably constrains freedom more tightly, making it harder for people to speak and move freely and impeding potentially disruptive innovation and reform. The existence of avoidance relaxes this tension. Less efficient enforcement of norms (a reduction in their direct effect) can in some cases better maximize indirect social effects: new forces may again "rise through illegality," and the boundary between governance and freedom remains in a healthy state of movement and dialogue rather than hardening into an either-or struggle. For example, by using homophones, pinyin, initials, split characters, associative terms, and the like to evade content review, network users express much content that laws and regulations may not truly intend to restrict but which, for various reasons, has never been clearly exempted; in this way overly broad content restrictions are refined and narrowed, and rough legal provisions are worked into operational display rules that determine, scenario by scenario, what may be said.
If algorithmic power becomes too strong, then even where cognitive manipulation unifies enforcement and compliance, a few may still awaken and push back, and the backlash will be fiercer, because the deep-rooted diversity of the normative order described above is not something technology can wholly change. Imagine content governance algorithms developed to the extreme, with no violation or evasion left unpunished and dissenters silenced: would cyberspace become a stagnant pool? Not really. A few would still find ways to break through architectural control, start afresh, open new spaces for speech, or even move expression from online to offline, relying on more primitive means to speak, gather followers, and spread ideas. For example, in response to a copyright system seen as overprotecting owners' interests and obstructing later authors' innovation, some have advocated an open-copyright regime squarely opposed to the copyright system. Again, the currently fashionable metaverse in fact originated in Facebook's attempt to escape constraints at the operating-system level and reach and manage users directly, showing that building a new control architecture is not impossible. Moreover, the Internet's reinforcement of the plural normative social order will supply ever more alternative social spaces and forces. Its empowerment is not confined to any one party to the game, and may be still more unfavorable to public entities attempting to control cyberspace; the deterrence curve will flatten out all the faster.
It follows that tolerating a certain proportion of algorithmic avoidance helps to limit algorithmic power, or to calibrate the efficiency of its enforcement, appropriately confining its scope of application, balancing governance and freedom, and promoting both social stability and the overall efficiency of governance. So long as most people do not choose the costly option of breaking through the architecture, leaving a few capable individuals room to escape the consequences of perfect enforcement lays the necessary foundation for the full extension of personal freedom, for positive interaction between law and social norms, and for possible reform and innovation.
5 Classification regulations for algorithm evasion
Recognizing the overall effectiveness of algorithmic power, and even the implicit positive function of algorithmic evasion, does not mean advocating inaction or laissez-faire toward avoidance; otherwise the balance of power would shift and the negative consequences would become serious. Some avoidance merely exploits the information and resources of the governed without touching the boundaries of technology and rules or raising difficult questions of plural normative competition; its damage to efficiency, fairness, and social welfare is pronounced, and its contribution to limiting algorithmic power is secondary, so it should be governed by strengthening technical capability and rule enforcement. Other avoidance does touch the balancing of marginal interests and plural normative competition; its negative consequences and positive functions are uncertain, and its benefits may even exceed its costs, so it should be governed through flexible means that are neither legal nor technical, maintaining communication and mutual understanding with the other side. This distinction maps onto the classic dichotomy of easy cases and hard cases (also called difficult or complex cases), the difference being whether clear rules apply. Governing hard cases through social norms, moreover, not only helps avoid the "bad law" that "hard cases make," but also accords with the principle of inclusive and prudent governance of emerging Internet issues.
5.1 Technical Enhancements to Rule Execution
The relatively simple avoidance strategies include platform selection among objects of governance, false declaration to meet governance requirements, simple behavioral adjustment, and confusing the governance entity. Clear rules apply to all four; what is needed is only better enforcement of those rules. Technology can play its largest role here, so problems of excessive rigidity or opacity are few, and the cost of supplementary manual review is low.
As to platform selection: although "rising through illegality" has been an important means by which small emerging platforms compete with large monopolistic ones, the Internet is no longer a lawless place, and no platform may cross the legal bottom line or shelter illegal and irregular behavior. Platforms should, for example, use all available means, including algorithms, to block and punish obscene or pornographic content and counterfeit goods. Exploiting data and information barriers between platforms to commit clear cross-platform violations should likewise be prohibited, regulated through national-level enforcement or industry-level self-discipline beyond any single platform, such as the "Qinglang" and "Jingwang" campaign series organized jointly by ministries and commissions, or the industry reviews, rewards, and sanctions initiated by online social organizations such as the Internet Society of China.
As to false declaration: privacy and personal information protection cannot justify evading mandatory legal provisions by supplying false information. On one hand, although the law restricts the collection, processing, storage, and dissemination of personal information by processors, it does not prohibit these outright, and indeed requires collection of the minimum personal information necessary for service or enforcement purposes. On the other hand, an architecture of front-end anonymity with back-end real names can balance privacy protection and network governance, and big data analysis can reduce or even eliminate the concealment of required identity information through false declaration. The combination of law and technology thus requires everyone to answer for their words and deeds in cyberspace.
As to simple behavioral adjustment: because the underlying violations are easy to identify, the accuracy of big data analysis and AI recognition should be continuously improved on the informational foundation laid by blockchain evidence preservation and the enforcement foundation laid by network architecture control, so that violations effected through avoidance are reliably identified, supplemented by necessary manual review; we must not abandon regulation of the simple merely because complex behavioral adjustment also exists. Moreover, since such evasion does not touch the boundaries of the rules, the results of black-box technical governance are more readily accepted by human reason; problems of algorithmic transparency and trust are relatively minor, so the precision and efficiency of technical governance can be maximized while the negative impact of its limitations is mitigated.
As to confusing the governance entity: because such evaders commit direct violations of laws and regulations, merely covering them with a flood of information, they too lack legitimacy. The regulatory approach can follow that above, improving the accuracy of data analysis and intelligent recognition through manual review and machine learning, and preventing AI from being blinded by crude, unintelligent processing. Indeed, as content governance has developed, obfuscation techniques such as splicing with blank content or manipulating brightness and tone have become increasingly familiar to algorithms. Obfuscation will therefore grow ever harder and shade into complex behavioral adjustment, for which technology and rules can do little and social norms must do more.
5.2 Governance application of social norms
The relatively hard or complex avoidance strategies mainly involve complex behavioral adjustment to meet governance requirements. To repeat, "complex" here is not meant in a purely technical sense (such as the various transformations and splicings of cross-media manuscript washing); rather, even where the technique used is simple or no technique is used at all, the conduct touches the balancing of marginal interests and the competition of plural norms, making its legal character extremely hard to determine. Examples include the many transformations used to evade sensitive-word censorship, soft pornography, and paraphrase-style manuscript washing, problems that even human intelligence struggles to judge and that artificial intelligence finds harder still. Regulatory tolerance should therefore be maintained and governance pursued through social norms, which better serves the goals of the relevant regulations, such as preserving political order, upholding public order and good morals, and encouraging creative innovation.
Specifically, first, platforms should continue to use user reporting, rating, self-organization, and self-management to bring into play the self-governance of individuals and organizations outside the platform's direct governance system. This not only organically integrates public information and knowledge, and the social norms they support, into algorithmic governance, aiding the balancing of interests in marginal governance issues; it also strengthens user recognition of algorithms, promotes voluntary compliance, and ultimately internalizes algorithmic requirements into unreflective habit, improving the legitimacy of algorithmic governance while lowering its cost. This high unity of enforcement and compliance, and the mutual coordination of law, platform rules, and social norms, is precisely what so-called "network ecosystem governance" means.
Second, industry associations should continue to improve the formulation and implementation of self-regulatory norms. On one hand, they should distill industry experience into judgment rules for the many violations the law does not clearly specify, guiding the governance work of holders of algorithmic power and shrinking the space in which evaders can freely adjust their behavior. On the other hand, through industry evaluation, rewards, and sanctions, they should appropriately incentivize or restrain violators and those directly or indirectly responsible, so as to reduce avoidance that genuinely harms social welfare. In both formulation and implementation, however, an open governance posture should be maintained, adapting promptly to changes in the Internet environment and adjusting continuously in practice.
Finally, government should continue to promote and improve the social credit system, extending it from the economic sphere to the broader sphere of social governance. Building on the online real-name system, a credit record could be established for each network user: violations reported by other users and confirmed by administrators would be penalized by point deductions graded to the severity of circumstances and consequences, with corresponding sanctions applied at defined low-score thresholds. At the same time, for users with wide influence who resemble public figures, a public comment area could be considered, displaying user opinions that a bare credit score cannot capture and supplying adequate informational reference for other users' evaluations, thereby forming a virtuous cycle of market feedback.
It should be noted, however, that in some instances of platform selection, false declaration, simple behavioral adjustment, and confusing the governance entity, the norms the algorithm executes are themselves controversial or problematic laws, platform rules, or social norms, whose rightness or wrongness cannot be settled unilaterally in the short term; room should be left for plural normative competition and regulatory tolerance maintained. Examples include choosing another platform to escape algorithms that discriminate against consumers or exploit workers; falsely declaring data to escape algorithms that discriminate against the elderly, women, or users in particular regions; and other obfuscations or adjustments aimed at unreasonable norms and the algorithms that serve them. In such cases what needs governing is often not the avoidance but the algorithm being avoided; avoidance, as an exercise of personal freedom, enjoys a high degree of legitimacy and performs the implicit function of directly restraining algorithmic power, countering its abuse, and prompting its improvement.
Similarly, choosing among states to avoid becoming an object of governance involves disputes between different legal norms whose rightness is hard to adjudicate; here algorithmic avoidance performs its deeper implicit function of softening backlash from the governed and maintaining social stability, as in the struggle over sexual freedom in the governance of vulgar pornography, or over informational freedom in the governance of plagiarism. Because each legal choice is backed by its own state's coercive force, however, the outcome of governance comes closer to the strengthened regulation applied to simple, low-level avoidance; and because the controversies run deep, avoidance and anti-avoidance remain locked in mutual play.
Throughout, this article has held to the perspective of the sociology of law in studying algorithmic evasion: rather than asserting a priori that evasion is harmful or must be regulated, it has empirically analyzed the three strategies, multiple causes, actual consequences, and implicit functions of algorithmic avoidance, and on that basis sketched a regulatory scheme. This empirical socio-legal approach suits all legal research and practical regulation, but it deserves particular emphasis in emerging fields, above all Internet governance. Emerging Internet governance issues often invoke their supposed uniqueness as reason or pretext for special regulation or non-regulation, concealing their essence as old wine in new bottles. Yet just as algorithmic power is nothing more than plural norms technically empowered by algorithms, and algorithmic avoidance nothing more than user power resisting algorithmic power, an empirical perspective can restore Internet novelties to their original form and even return them to the settled path of legal doctrine, new wine in old bottles. In this sense the empirical approach also becomes the greatest common denominator uniting legal doctrine and social-scientific jurisprudence, letting traditional law and other disciplines communicate and coordinate in concrete observation and analysis and then jointly propose comprehensive solutions. And this is the broader path of research and regulation that this article seeks to reveal through the small incision of algorithmic avoidance.