
Algorithm Paradox and Institutional Response: An Empirical Study Based on User Algorithm Application Perception



Author 1: Xu Ke, Associate Professor, School of Law, University of International Business and Economics

Author 2: Cheng Hua, Associate Professor, School of Economics, Renmin University of China


Abstract: Although the basic framework of algorithmic governance in China has taken preliminary shape, relevant empirical research remains weak. A large-scale questionnaire survey on user perception of algorithm applications in China, and the analysis of its results, reflect the public's high level of concern about algorithmic risks, as well as differences in algorithm concerns among different groups, contradictions between algorithm concerns and behavior within the same group, and trade-offs between the convenience of algorithm recommendation and algorithm concerns. Faced with these manifestations of the "algorithm paradox", institutional responses should be grounded in its internal mechanisms: embedding "algorithm for goodness" into "business processes", moving from "algorithm transparency" to "algorithm literacy", implementing "algorithm fairness" through "process fairness", and using "algorithm accountability" to build "algorithm security", ultimately forming a Chinese algorithm governance system that meets the real needs of the public and balances multiple goals.


If 2021 is recognized as the first year of algorithm governance in China, then with the implementation of the "Guiding Opinions on Strengthening the Comprehensive Governance of Internet Information Service Algorithms" (hereinafter the "Algorithm Comprehensive Governance Opinions") and the "Provisions on the Administration of Algorithm Recommendations in Internet Information Services" (hereinafter the "Algorithm Recommendation Management Measures"), and with the advancement of the Cyberspace Administration of China's "Qinglang 2022: Comprehensive Algorithm Governance" special action, 2022 can be said to be the year when China's algorithmic governance took root. In contrast to the flourishing normative research and rulemaking, the systematic observation and collection of empirical facts, which can serve as objects of analysis and inference in empirical research on algorithms, remains relatively scarce. Starting from the view that "empirical facts can serve as normative reasons", this article therefore adopts a questionnaire survey to describe Chinese users' specific perception of enterprise algorithm applications, identify their real concerns and difficulties in acting, and explore the underlying mechanisms, in the hope of contributing to the reflection on and improvement of China's algorithm governance system.


1 User Algorithm Application Perception Survey: Design and Results

1.1 Review of Empirical Research on User Algorithm Application Perception

Existing surveys on user perception of algorithm applications in China mostly focus on specific scenarios such as "differentiated pricing" and "information cocoons". For example, a survey of 2,008 respondents by the Social Survey Center of China Youth Daily showed that 51.3% of respondents had experienced internet companies using big data to charge existing customers higher prices ("big data price discrimination"), and 59.1% hoped that the price authorities would legislate to regulate discriminatory pricing. Based on a survey of 310 recruited Meituan platform users, Wu Zhiyan and others found that algorithmic price discrimination reduces users' perceived price fairness and increases betrayal behavior. Yu Guoming and Fang Keren, using data from the 2019 National Survey on Media Contact, Use, and Media Values, found that algorithms did not produce an information cocoon but instead provided individuals with a diverse and rational information world. After the introduction of the Algorithm Recommendation Management Measures, Guangming Daily, together with a research team from Minzu University of China, conducted a questionnaire survey on users' perception of algorithm recommendation and put forward suggestions for improving algorithm literacy.

Unlike empirical research in China, which focuses on "algorithm recommendation", Western scholars mostly examine the impact of "algorithmic decision-making" on users. In 2018, the Pew Research Center surveyed user acceptance of typical algorithmic decision-making scenarios such as big data credit scoring, crime risk assessment, resume screening, and interview evaluation. A series of other studies further indicate that users' attitudes towards algorithmic decision-making vary greatly with the outcome, nature, and domain involved.

Reviewing domestic and international research, there is little general-purpose research on user algorithm perception, and an overall grasp of the algorithm governance system is also lacking. Against the backdrop of China's algorithmic legal framework having just taken shape and being continuously improved, empirical research on algorithm perception grounded in China's practice and problems is particularly urgent and important. On the one hand, if the legal system aims to ensure the well-being of the people, then users' true perception of algorithm applications is the factual foundation of "laws that can be touched and perceived"; on the other hand, measuring legal effectiveness is a prerequisite for evaluating the effectiveness of the algorithmic system. Systematic and diachronic observation of public algorithm perception will be the best indicator for judging how far the law's preset goals have been achieved, and will benefit the improvement of China's algorithmic norms.


1.2 Questionnaire Design: Theory, Structure, and Questions

As a response to algorithmic law, the questionnaire starts from the value stance, legal principles, and regulatory guidance of algorithmic governance in China, and is guided by the values of "algorithm transparency, algorithm security, algorithm fairness, and algorithm for goodness" established in the "Opinions on Comprehensive Governance of Algorithms" and the "Algorithm Recommendation Management Measures". The questionnaire sets a total of 27 questions along the dimensions of "user cognition, user attitude, user rights, and user action". In addition, to depict different groups' perception of algorithms more precisely, the questionnaire also includes six questions on basic personal information: gender, age, occupation, region, education level, and annual income.

Algorithm transparency is a globally recognized primary algorithm value. A review of global algorithmic governance documents found that 73 out of 84 documents support the "transparency principle". Article 13 of China's "Opinions on Comprehensive Governance of Algorithms" likewise treats "promoting the openness and transparency of algorithms" as an important legislative goal, urging enterprises to disclose promptly, reasonably, and effectively information such as the basic principles, optimization objectives, and decision-making standards of algorithms, to provide explanations of algorithmic results, to facilitate complaint channels, and to dispel social doubts. On this basis, and in combination with Article 24 of the Personal Information Protection Law and the relevant provisions of the Algorithm Recommendation Management Measures, the questionnaire asks: "Do you know that companies use algorithms when providing internet services?", "Do you understand the content and purpose of the algorithms companies use?", "Do you think companies need to explain algorithms to users?", "If companies recommend products to you based on personal characteristics such as gender, preferences, geographical location, transactions, and browsing records, what is your attitude?", and "When you object to algorithmic results (such as user star ratings or credit scores), do you want the company to manually intervene and re-verify them?"

Algorithm security "is a unique algorithm value with Chinese characteristics. The "Opinions on Comprehensive Governance of Algorithms" aims to establish and improve the governance mechanism and regulatory system for algorithm security, and takes the framework of "algorithm self security, algorithm security controllability, and algorithm application security" to prevent ideological, economic development, and social management risks caused by algorithm abuse, and to prevent the use of algorithms to interfere with social public opinion, suppress competitors, and infringe on the rights and interests of netizens, Maintain the order of cyberspace dissemination, market order, and social order. Based on the "Opinions on Comprehensive Governance of Algorithms" and the "Management Measures for Algorithm Recommendation", What do you think is the risk situation for companies using algorithms to harm user rights? "" Do you think algorithms violate your personal privacy when recommending personalized advertisements, videos, and news to you? "" Which of the following types of companies do you think have algorithms that seriously harm user rights? "" Do you understand the relevant national regulations on algorithm security and the legitimate rights of citizens themselves What do you usually do to avoid the potential adverse effects of algorithms? What do you think can be done to avoid the potential security risks that algorithms may bring.

"Algorithm fairness "is also a widely accepted algorithm value. As a broad concept, algorithm fairness under the "Opinions on Comprehensive Governance of Algorithms" and "Management Measures for Algorithm Recommendation" includes multiple connotations of avoiding individual discrimination, achieving fairness in results, and protecting vulnerable groups. Given that the survey targets general users, the questionnaire mainly examines the public's perception of fairness in algorithm results, especially "differentiated pricing". For this reason, The questionnaire was designed to ask, 'Have you ever experienced companies charging different prices for the same product or service to you and others?' 'How do you feel that companies differentiate pricing?' 'What is your attitude towards companies pricing differently based on income levels, new and old users, frequency of service acceptance, or whether they are members?' 'What type of company do you think differentiation pricing is more severe?' What would you do when you realize that you are facing differentiated pricing treatment.

The "algorithm for goodness" also has distinct Chinese characteristics. Article 28 of the Data Security Law requires that the research and development of new technologies should be conducive to promoting economic and social development and enhancing people's well-being. The "Opinions on Comprehensive Governance of Algorithms" and the "Management Measures for Algorithm Recommendation" also have the legislative vision of "guiding algorithm applications to improve", and further refine it into specific norms such as "algorithms serve information content", "algorithms serve a good life", and "algorithms serve the elderly and minors". Based on this, The questionnaire was designed to ask, "Have you ever been influenced by advertisements pushed by companies to purchase unnecessary products or services?" "How often do you receive bad information (rumors, vulgarity, pornography, etc.)?" "Do you think there are many situations where companies use algorithms to artificially distort information?" "How do you feel about relying on companies to automatically sort for reading or watching?" Which type of enterprise do you think is doing better in terms of personalized design such as' aging friendly 'and' underage model '? "" Will you proactively report bad information to the enterprise? After reporting, will the frequency of bad information decrease? "And other issues.

1.3 Distribution and collection of questionnaires

In December 2021, the survey was distributed through the online questionnaire platform Wenjuanxing ("Questionnaire Star") and the Alipay app. A total of 15 million questionnaires were distributed and 6,941 valid responses were received. Although the response rate is low, the valid respondents are widely distributed in gender, age, region, income, and education, reflecting the overall diversity of Chinese netizens (see Table 1). A low response rate is not in itself directly linked to "non-response bias". More importantly, considering that this questionnaire centers on the population with real perception of algorithms, the "selective response" of participants and the resulting sample bias in gender, age, and region can be assessed through sample-representativeness measures such as the R-indicator, and the statistical results remain valid and credible.
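To make the R-indicator mentioned above concrete, the sketch below shows one common way such a representativeness indicator is estimated, R = 1 - 2·SD(estimated response propensities); the data, column names, and model here are hypothetical illustrations under stated assumptions, not the authors' actual procedure.

# Illustrative sketch (not the authors' procedure): estimating a sample
# representativeness indicator R = 1 - 2 * SD(estimated response propensities).
# The frame, column names, and response rate below are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
frame = pd.DataFrame({
    "gender":    rng.choice(["F", "M"], size=n),
    "age_group": rng.choice(["<30", "30-59", "60+"], size=n),
    "region":    rng.choice(["east", "central", "west"], size=n),
    "responded": rng.binomial(1, 0.05, size=n),  # roughly 5% respond
})

# Model response propensity from auxiliary variables known for the whole frame.
X = pd.get_dummies(frame[["gender", "age_group", "region"]], drop_first=True)
model = LogisticRegression().fit(X, frame["responded"])
propensities = model.predict_proba(X)[:, 1]

# R ranges from 0 (maximally selective response) to 1 (fully representative response).
R = 1 - 2 * propensities.std()
print(f"estimated R-indicator: {R:.3f}")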


Table 1 Overview of survey respondents

1.4 Analysis of survey results

1.4.1 Algorithm transparency

In terms of cognition, users have limited understanding of the algorithms behind internet services. More than half of the respondents are unclear about whether companies use algorithms, and nearly 65% do not understand the content and purpose of the algorithms companies use: 16.90% are completely unclear whether companies use algorithms and 39.08% are not very clear, while 20.34% have no understanding of the content and purpose of enterprises' algorithms and 44.64% have little understanding. This indicates that most respondents perceive enterprise algorithm applications to some degree, but remain at a stage of shallow understanding and vague cognition.

In terms of attitude, respondents showed acceptance of algorithmic advertising recommendations, with only 7.17% preferring uniform advertising push without algorithms. On the other hand, when asked about recommendations based on gender, preferences, geographical location, and transaction and browsing records, more people expressed concerns, with 28.99% hoping to turn off this feature.

In terms of rights, users have a strong demand for algorithm interpretation rights, with only 7.22% of respondents believing that there is no need for explanation, while the rest believe that different degrees of explanation are needed.

In terms of actions, if users have objections to the algorithm results, over three-quarters (76.48%) of respondents hope for manual intervention, while only 11.55% of respondents believe that manual intervention is not as accurate as the algorithm.


1.4.2 Algorithm security

In terms of cognition, nearly 80% of respondents believe that algorithm applications may pose a risk of harming user rights: 20.28% believe the risk is high, nearly 60% believe there is some risk, and less than 5% believe there is no risk. The more specific question of whether personalized recommendations violate privacy further supports this conclusion, with over 60% agreeing and only 12% disagreeing.
In terms of attitude, users' perception of algorithmic risk differs somewhat across internet industries: 55.86% of respondents believe that e-commerce platforms seriously harm user rights, while 47.51%, 44.72%, 44.5%, and 44% believe the same of social media, video media, financial institutions, and search platforms, respectively.

In terms of rights, respondents have limited understanding of algorithm security regulations and their legitimate rights and interests. Only 7.95% of the respondents have personally read relevant laws and regulations, 36.14% have only heard of some content, 25.08% have only heard of names, and up to 30.83% of the respondents have no understanding of China's legislation.

In terms of actions, respondents showed a strong sense of self-protection and placed high hopes on proactive management by enterprises. To avoid the adverse effects of algorithms, 47.27% of respondents actively manage enterprises' data collection requests and reject unreasonable ones, and more than 45% choose to turn off personalized recommendations. Meanwhile, 60.12% of respondents hope to see enterprises' algorithm security assessments strengthened, ranking first among all options. In addition, users put forward clear proposals such as strengthening the legal responsibility of enterprises for their use of algorithms, strengthening supervision of enterprises, and limiting the scope of enterprise data collection.


1.4.3 Algorithm fairness

In terms of cognition, respondents believe that differentiated pricing is widespread: only 22.16% stated that they have not experienced it, while 51.46% and 26.38% occasionally or frequently experience it, respectively. Among respondents who have experienced differentiated pricing, the most common experience is inconsistent pricing between new and old customers (59.26%), which includes both "rewarding old customers" and "attracting new customers" campaigns; next come pricing based on customer activity and membership (56.19% and 53.54%), and differentiated pricing based on income level (34.88%). In addition, users perceive differentiated pricing differently across internet industries: respondents believe e-commerce companies use it most often (60.96%) and music and entertainment platforms least often (22.30%).

In terms of attitude, respondents' overall response to differentiated pricing was relatively negative: 55% expressed concern about it, though more than 30% expressed understanding and tolerance. Attitudes also differ across types of differentiated pricing. The most accepted is pricing based on membership status, with 69% of respondents agreeing or neutral; the least accepted is pricing based on income level, with 44% expressing little or no agreement. However, even for the most accepted behavior (membership-based pricing), 31% of respondents oppose it, and for the least accepted behavior (income-based pricing), 30% agree with it, indicating that the public has not yet reached consensus on substantive criteria for judging whether differentiated pricing is reasonable.
In terms of rights and actions, most users will actively safeguard their rights and interests, with over half (55.06%) choosing to "reduce or suspend use of the company's products or services", and 14% exposing or complaining about the companies involved.


1.4.4 Algorithm for goodness

In terms of cognition, users clearly perceive harmful information: over half of respondents receive harmful information frequently or occasionally while using apps (15% frequently and 37% occasionally), and another 32% receive it relatively infrequently. At the same time, users strongly perceive intentional information distortion through enterprises' use of algorithms, with more than 80% of respondents believing that companies use algorithms to artificially distort information in many or relatively many cases. In addition, users find enterprises' accommodations for vulnerable online groups such as the elderly and minors unsatisfactory. Across industry segments, video media and social media platforms are rated relatively highly, while online travel platforms and search platforms are rated relatively low; financial institutions are recognized by only 14.58% of respondents, and as many as 35.18% of respondents believe no industry does this satisfactorily.

In terms of attitude, users are relatively neutral towards "algorithm-recommended information and induced consumption": the problem of algorithm-induced excessive consumption is not prominent, and the "information cocoon" varies from person to person. According to the survey, 85% of respondents did not, or rarely, purchased unnecessary products or services under the guidance of recommendation algorithms. One may ask whether this attitude simply reflects users being unknowingly "routinized" by their bounded rationality. Comparing them with the remaining 15% of respondents and analyzing variables such as age, occupation, and income level one by one, we found no evidence that young people, students, or low-income groups are more likely to be induced or misled. In addition, respondents' perceptions of acquiring knowledge and information differ significantly: 40% believe recommendation algorithms help obtain rich information and save search time; 25% believe algorithm-recommended information is relatively homogeneous and limits the diversity of knowledge acquisition; 18% believe pushed content is too entertainment-oriented and easily leads to addiction; and 17% doubt the effectiveness of algorithm recommendations and believe it is better to search for information themselves.

In terms of rights and actions, users' awareness of resisting harmful information is weak. 53% of respondents said they did not report such information when encountering it, and two-thirds of those who did report said the frequency of such information did not decrease afterwards, indicating that enterprises attach little importance to user feedback and that complaint mechanisms need improvement.

2 Theoretical Implication: The Proposition of Algorithm Paradox


2.1 Deviation in user's algorithmic cognition, attitude, and behavior

Through logical analysis and cross-group comparison of the survey results, we found deviations among the public's cognition, attitudes, and behavior towards algorithms, which this article calls the "algorithm paradox".

Firstly, there is a deviation between users' cognition of and attitudes towards algorithms. As shown in Table 2, 55.98% of users said they understood little or nothing about the algorithms enterprises use, and as many as 64.98% understood little or nothing about the purposes for which enterprises use them. However, when asked about their attitudes, a very high proportion of users stated that enterprises' algorithms would harm user rights, infringe privacy, and manipulate information, reflecting a markedly negative evaluation of algorithmic risks. Figure 1 shows the structure of the relevant results: 79.39% of users believe algorithms infringe on user rights, and 60.33% believe recommendation algorithms infringe on their privacy. This means that, on the one hand, a considerable proportion of users are unfamiliar with enterprises' use of algorithms, yet on the other hand they hold clearly negative attitudes towards it.

Table 2 Respondents' Basic Cognition of Algorithms


Figure 1 Respondents' Attitudes towards Algorithm Risk

Specifically, we further analyzed the questionnaire data and divided users into two groups: one with higher awareness of the content and purpose of the algorithms enterprises use, and one with lower awareness. We found that the former group held weaker negative attitudes towards risk and privacy infringement than the latter, meaning that a considerable number of users, although "ignorant" of algorithms, held preconceived negative evaluations (Table 3).


Table 3 Attitude Differences among Users with Different Perceptions
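As a rough illustration of the kind of cross-group comparison summarized in Table 3 (not the authors' actual analysis script; the file name, column names, and answer codings are hypothetical), respondents could be split by self-reported algorithm awareness and their risk attitudes cross-tabulated and tested:

# Illustrative sketch of the cross-group comparison behind Table 3.
# "survey_responses.csv" and the column names below are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the 6,941 valid responses

# Split respondents into higher- vs lower-awareness groups.
df["aware_group"] = df["algorithm_awareness"].map(
    lambda x: "higher" if x in ("understand well", "mostly understand") else "lower"
)

# Row-normalized cross-tabulation of awareness group against perceived algorithm risk.
share = pd.crosstab(df["aware_group"], df["risk_attitude"], normalize="index")
print(share.round(3))

# Chi-square test of independence on the raw counts.
counts = pd.crosstab(df["aware_group"], df["risk_attitude"])
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")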

Secondly, there is a deviation between users' attitudes and behaviors towards algorithms. First, although users believe in attitude that algorithms will infringe their rights and interests, in behavior they tend to choose the convenience algorithms bring. About 80% of respondents believe that enterprises' use of algorithms may harm user rights, infringe personal privacy, or artificially distort information, yet a considerable proportion of users are not averse to obtaining services, products, and experiences through algorithms. For example, only 7% of users approve of pushing the same advertisement to everyone, a quarter approve of fully personalized ("one profile per user") push, and the remaining users also accept, to varying degrees, recommendations based on classifying personal information. In addition, when asked about the impact of enterprises' algorithm use on themselves, users' actual feelings were relatively positive: 85% said they would not over-purchase because of automatic recommendations, and 40% felt automatic information push was accurate and beneficial. Generally speaking, a considerable number of users evaluate algorithms negatively, but in specific scenarios their behavior shows tolerance and acceptance of recommendation algorithms. Secondly, although users believe that enterprises' use of algorithms creates widespread risks, their behavior is relatively passive, lacking initiative in protecting their rights and mastering algorithm regulations. The survey shows that around 85% of users have been troubled by harmful information, but more than half of them have never proactively reported it.

Finally, the deviations in cognition, attitude, and behavior are more pronounced among people over 60 than among the young. Young people exhibit higher algorithmic awareness, hold a more curious and inclusive attitude towards algorithms, and are relatively more trusting of them; they are also more rational in accepting personalized recommendations and differentiated pricing, and more proactive in protecting their own rights and interests. This suggests that "digital natives" have relatively high algorithmic literacy, while the "digital immigrant" generation shows a degree of cognitive and behavioral bias.


2.2 Algorithm Paradox: Origin and Types

The algorithm paradox is not entirely new. In fact, a closely related concept, the "privacy paradox", was revealed long ago. In 2006, Barnes first used the "privacy paradox" to refer to the difference between teenagers' casual disclosure of personal privacy on social networking sites and adults' concerns about internet privacy disclosure; its essence is that different people treat privacy differently. Later, starting from "privacy concerns", that is, people's awareness of and subjective feelings about privacy leakage and infringement, researchers came to call all situations in which privacy concerns are inconsistent with privacy-related actions the "privacy paradox". Research has found that users' privacy behavior does not always align with their statements: in specific situations, users often forget or lower their level of privacy concern, and sometimes even disclose privacy casually, without reason or precaution. Numerous empirical studies have confirmed this phenomenon. In 2001, Spiekermann simulated an online shopping consultation scenario and observed users' information disclosure through their interaction with a chatbot. The results were surprising: among the self-described "privacy fundamentalists", "pragmatists", and "marginally concerned", 24%-28% of privacy fundamentalists, who claim to be extremely concerned about privacy, voluntarily provided their home addresses before interacting with the robot, and 30%-40% of pragmatists provided home addresses without taking any privacy protection measures. This is not an isolated case. In 2019, the CIGI-Ipsos Global Survey on Internet Security and Trust showed a significant deviation between the proportion of people concerned about privacy leakage risks and the proportion willing to trust the internet and share personal information. Many scholars have therefore stated bluntly that "people's privacy concerns are unrelated to their privacy behavior". In recent years, with the spread of mobile internet, user profiling technology, and recommendation algorithms, the "personalization-privacy paradox" has become an important form of the privacy paradox. In a context of information overload and increasingly scarce attention, users enjoy the better services that personalization brings while worrying about privacy infringement; conversely, enterprises benefit from the competitiveness gained through personalized services while facing user churn caused by privacy concerns. In this sense, the "personalization-privacy paradox" arises from the tug of war between enterprises' personalized services and users' privacy.

Building on the above research, we may refer to people's awareness of and subjective feelings about algorithmic risks such as algorithmic discrimination and abuse as "algorithm concerns", and call the deviation between algorithm concerns and algorithmic behavior the "algorithm paradox", further concretized into three situations: (1) differences in "algorithm concerns" among different groups; (2) the contradiction between "algorithm concerns" and algorithmic behavior within the same group; and (3) the trade-off between the convenience of algorithm recommendation and algorithm concerns.


2.3 Explanation of the Causes of Algorithm Paradox

2.3.1 "Algorithm focus" as a development preference

Algorithm concern is an acquired perception of technological risk; it does not derive from an individual's innate preference for risk avoidance but is shaped by life experience, knowledge structure, habits, and cultural background. This explains why different groups hold different attitudes towards algorithms. Research on Alipay users' privacy perceptions found that privacy concerns may be a preference formed in the process of using digital services: as users gradually derive benefit and enjoyment from the services an app provides, they also begin to worry more about the potential risks of the app's data collection and sharing. In short, users' privacy concerns may grow as their data accumulates. Applying this theory to algorithms, the frequency of receiving algorithm services and the level of understanding of algorithms can be taken as variables to explain the differences in "algorithm concerns" among different groups.

The survey results show that, among all age groups, respondents aged 60 and above, who have the least exposure to algorithms, perceive the potential risks of algorithms least strongly, with only 12% believing that algorithms pose high risks. In the scenario of aggregating personal information for "differentiated pricing", on the question of algorithms using personal feature labels, more educated respondents are more inclined to constrain corporate behavior: the proportion choosing "allow companies to independently choose labels and how to use them" is 23% among those with high school education or below, but only about 13% among those with undergraduate education or above, ten percentage points lower. In responding to the potential negative effects of algorithms, highly educated respondents are also more proactive, determined, and organized: as education rises, a higher proportion choose to turn off personalized recommendations rather than browse enterprise content as diversely as possible, or choose to directly avoid the algorithm's personalized profiling and computation of themselves rather than let the algorithm learn more about them through more browsing. By comparison, among respondents with high school education or below, the highest proportion (16%) used none of the listed methods to actively avoid the potential adverse effects of algorithms.

2.3.2 "Algorithmic behavior" trapped in distortion and manipulation

The "Behavior Distortion and Manipulation Theory" attempts to reveal the deviation between algorithm attention and algorithm behavior from the subjective and objective factors that improperly affect algorithm behavior. In terms of external factors, enterprises often use "dark patterns" in application interaction interfaces such as setting menu defaults, mandatory registration, obscure language, and shadow archives to transform the architecture environment of the Internet into weapons, providing users with the illusion of choice, and influencing user decisions through psychological manipulation and disguised forms. In fact, if the user is fully aware and has the ability to choose alternative solutions, they may not make these choices. As a micro form of power, enterprises can enhance their influence and control over users through algorithms, and subtly influence users' choices through rating, classification, and prediction.

From the perspective of internal factors, users' algorithmic behavior faces the dilemmas of incomplete/asymmetric information, bounded rationality, and systematic cognitive bias. "Incomplete/asymmetric information" refers to users' lack of sufficient understanding of the existence and nature of algorithms, which makes it difficult to accurately judge the magnitude of algorithmic risks or to know whether alternative technologies and protective solutions exist; when users adopt a rather myopic mindset, their actual decisions are distorted. "Bounded rationality" means that users cannot process all the uncertain information about the costs and benefits of algorithms and can hardly predict the returns of their strategic choices; individuals therefore resort to "simplified mental models, approximation strategies, and heuristics" such as intuition, common sense, and guesswork, which leads to "systematic cognitive bias". Research in behavioral economics shows that optimism bias, the affect heuristic, hyperbolic discounting, and framing effects all lead to erroneous judgments. "Optimism bias" refers to individuals' tendency to be overconfident in their own algorithm-protection skills and knowledge and to believe that the algorithmic risks they face are relatively small. The "affect heuristic" refers to the tendency to underestimate the risks of things one likes and overestimate the risks of things one dislikes, which partly explains minors' different attitudes towards algorithms: research shows that, compared with other age groups, digital natives perceive algorithmic risk weakly and are relatively open and tolerant of algorithmic errors. "Hyperbolic discounting" means that people evaluate distant and near-term events inconsistently: when asked whether they intend to adopt protective strategies, individuals may consider algorithmic risks important, but when faced with the immediate benefits of using algorithms, their preferences change and they choose the immediate gain. The "framing effect" means that people's algorithmic decisions change with the way information is presented: even if the objective risks remain unchanged, describing algorithmic risks and benefits in different ways leads users to different choices. As a cognitive structure guiding how people perceive and reproduce reality, the widespread criticism of algorithms in today's society constitutes the "basic frame" of algorithm cognition, and the pervasive unknowability of algorithms further magnifies the fear of them. In addition, most respondents do not understand national laws and regulations on algorithm security and cannot protect their rights through legal remedies; they often have to decline algorithm applications altogether to reduce algorithmic risk, falling into a false dilemma of either accepting everything or rejecting everything.
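As a worked illustration of the hyperbolic discounting mentioned above (a standard textbook formulation, not anything estimated in this survey), the perceived value of a benefit A received after a delay of D days can be written as

\[
V(A, D) = \frac{A}{1 + kD}, \qquad k > 0,
\]

\[
V(10, 0) = 10 \;>\; V(15, 7) = \tfrac{15}{8} \approx 1.9, \qquad
V(10, 365) = \tfrac{10}{366} \approx 0.027 \;<\; V(15, 372) = \tfrac{15}{373} \approx 0.040 \quad (k = 1).
\]

With k = 1, the smaller immediate benefit (10 today) beats the larger delayed one (15 in a week), yet when both options are pushed a year into the future the preference reverses; this mirrors users who endorse protective strategies in the abstract but choose the immediate convenience of an algorithm in the moment.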

2.3.3 Algorithmic behavior based on cost-benefit calculation

Unlike the assumption of "irrational individuals" under the "behavior distortion and manipulation theory", the "privacy calculus" theory explains the deviation between individual feelings and behavior from the perspective of "benefit maximization": individuals weigh potential privacy losses against expected privacy gains, and their ultimate behavior depends on the result of this trade-off. The losses include intrusion upon seclusion, social discrimination, identity theft, internet fraud, doxxing ("human flesh search"), and so on; the gains include entertainment, convenience, personalization, self-presentation, maintaining social relationships, and acquiring social capital. When gains outweigh losses, people will trade privacy for higher benefits. The distinction between "stated preferences" and "revealed preferences" regarding privacy further indicates that although people often claim to be concerned about privacy, their behavior in specific scenarios reveals their true preferences. In fact, people are willing to obtain digital content and services from enterprises through privacy disclosure; as the EU's 2015 proposal for a directive on contracts for the supply of digital content pointed out, in the digital economy users' "paying with data" and "paying with money" carry the same meaning. Faced with complex and diverse internet information and products, algorithm recommendations help users reduce search and decision-making costs through personalized page layouts, precisely matched search results, and content push that reflects user preferences.
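The privacy-calculus logic described above can be condensed into one schematic inequality (our own notation, not drawn from the cited literature): an individual discloses data, or accepts an algorithmic service, when the expected gains exceed the expected losses,

\[
\text{accept} \iff \sum_{i} p_i\, b_i \;>\; \sum_{j} q_j\, c_j,
\]

where the b_i are anticipated benefits (convenience, personalization, social capital) weighted by their perceived probabilities p_i, and the c_j are anticipated losses (intrusion, discrimination, fraud) weighted by their perceived probabilities q_j.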

Moreover, algorithmic differentiated pricing is fast and dynamic, and can improve product and service quality while steadily meeting changing market demand. The various conveniences algorithms provide make users willing, after weighing the trade-offs and knowing the risks, to accept algorithm services. This may explain the trade-off between "algorithm recommendation convenience and algorithm concerns" found in the survey. Users' rejection of algorithms is essentially a rejection of enterprises' excessive use of algorithms, not of algorithmic technology and its applications as such. When algorithms meet user expectations, such as more convenient information channels, more accurate recommendations, and higher-quality content, users do not resist them; conversely, when algorithms fail to meet these expectations, concerns about algorithmic addiction, information cocoons, algorithmic infringement, and other abuses naturally come to the fore.

3 Institutional Response: Resolution of Algorithm Paradox

3.1 Understanding development preferences: embedding "algorithm for goodness" into "business processes"

As a development preference, "algorithm concern" increases with the use of services and products. Enterprises should therefore act preventively and introduce "algorithm for goodness" into every stage of service and product design and use, embedding the requirement of "goodness" into system design from the outset as the default rule of system operation rather than as a remedial measure after the fact. This "governance by design" of algorithms follows the design principle of "human-centered design" (HCD). HCD advocates placing "people" at the center of any system, starting from users' needs, interests, and abilities, and evaluating and understanding humans through direct contact with them, in order to provide products and services that are usable, understandable, and consistent with social values. HCD is inherently an interdisciplinary practice. Enterprises should therefore first establish an algorithm ethics committee composed of ethics experts, technical experts, legal experts, and public representatives; before deploying algorithms that may have a significant impact on the enterprise or society, the committee should initiate an ethical review based on the principle of "algorithm for goodness" to identify, prevent, and eliminate deviations of the relevant applications from basic values. On the other hand, HCD also needs to integrate knowledge from information science, psychology, cognitive science, anthropology, and other fields; enterprises need to bring together user-experience designers, visual designers, interaction designers, and information designers and have them work together to ensure that users' rights and interests come first in any design.

The connotation of algorithm for goodness is very broad, and to achieve regulatory effectiveness it must be given focus. The survey results show that the public reacts strongly to intentional information distortion by algorithms, the algorithmic push of harmful information, and the low level of service for vulnerable groups online, whereas the problem of algorithm-induced excessive consumption is not prominent and the "information cocoon" effect varies from person to person. Regulators can therefore focus on "improving information content" and "protecting vulnerable groups online", implement information content governance laws and regulations such as the "Internet Information Service Management Measures" and the "Regulations on Ecological Governance of Network Information Content", strengthen enforcement of prohibitions on illegal information content, require enterprises to further optimize filtering and recommendation algorithms to prevent and resist harmful information, and prohibit behaviors that disrupt the order of the network ecosystem such as algorithmic traffic hijacking, false account registration, illegal account trading, and manipulation of user accounts. At the same time, enterprises should actively respond to public demands, establish complaint, reporting, and feedback mechanisms, and tailor services and products to the cognitive characteristics of minors and the elderly.


3.2 Avoiding distortion manipulation: moving from "algorithm transparency" to "algorithm literacy"
Faced with algorithmic behavior that may fall into distortion and manipulation, the first step is to break enterprises' "dark patterns". Sunlight is the best disinfectant, and the algorithm transparency mechanism aimed at opening the "algorithmic black box" is the most direct and effective of the available paths. China's existing regulations provide various tools for algorithm transparency, such as algorithm filing, algorithm auditing, algorithm inspection, and algorithm explanation. In a situation where the public's understanding of algorithms is limited, a user-facing "right to algorithmic explanation" becomes the key to resolving the algorithm paradox, implementing algorithmic accountability, and achieving algorithmic fairness. The survey shows that users have a strong demand for algorithmic explanation, but with significant differences: 54.28% need a simple explanation of the algorithm, while 38.49% need a detailed one. At the same time, when algorithmic results give rise to objections, more than three-quarters of respondents hope for manual intervention to correct possible algorithmic errors. Therefore, although the principle of algorithm transparency is contested in the academic community, regulators should adhere to the transparency requirement and set hierarchical transparency rules from the user's perspective. The first is "transparency of the basic information of algorithm services" (the simple explanation rule): enterprises should inform users in a conspicuous manner (including but not limited to through the algorithm filing system) of the names, application fields, types, and purposes of the core algorithms directly related to the service. The second is "transparency of the basic principles of algorithm services" (the detailed explanation rule): enterprises should inform users, in a visible, readable, and understandable manner, of the basic principles and main operating mechanisms of the core algorithms directly related to the service. The third is "transparency of the processing results of algorithm services" (the comprehensive explanation rule): when users object to algorithmic decision results, enterprises should inform them of the personal information collected and processed, the personal-characteristic parameters and model choices involved, and their logical relationship to the decision results. The fourth is "manual intervention upon objection to algorithm services" (the user objection rule): when users object to algorithmic decision results, enterprises should manually review those results.

Algorithm transparency does not necessarily mean that algorithms are understood. Owing to the public's limited technical capacity, the complexity of algorithms, machine learning, and disruptive disclosure (information confusion), algorithm transparency may fail to improve user rationality. Research on the privacy paradox shows that increasing people's understanding of privacy technologies and threats, cultivating a scientific awareness of privacy risks, and helping people obtain information on how to protect privacy can effectively reduce paradoxical privacy behavior. At present, the public shows marked cognitive bias towards algorithms. In the first stage of algorithm application, the public fully enjoyed the convenience of algorithms but knew nothing about them; in the second stage, with the continuous spread of negative news about privacy violations and big data price discrimination, the public grew strongly worried and, under the herding effect, also developed misconceptions about algorithms. The survey shows that the public has not yet built a basic structure of algorithmic knowledge or a logical understanding of algorithms, making it difficult for users to make the choices most favorable to themselves when facing algorithms. The research further finds that cognition of and attitudes towards algorithms vary with users' educational background, growth environment, and income level, with significant gaps between groups. It is therefore necessary to systematically enhance the public's "algorithmic literacy".

The so-called "algorithmic literacy" refers to "being aware of the use of algorithms in network platforms and services, understanding the working principles of algorithms, being able to critically evaluate algorithm decisions, and possessing skills to cope with or even affect algorithm operations. As a systems engineering project, algorithmic literacy is like a broader 'digital literacy' that requires the joint participation of entities such as the state, enterprises, research institutions, and social organizations. To this end, all parties can work together to strengthen user education, carry out extensive and continuous daily algorithm education, and help users improve their cognitive ability and self-protection ability towards algorithm services.

Specifically, work can proceed along three dimensions: algorithm knowledge, algorithm attitudes, and algorithm skills. (1) Popularize algorithm knowledge: regulators can provide the public with relevant, user-friendly rights-protection rules and self-protection measures through dedicated websites, and enterprises should, on the basis of algorithm transparency, comprehensively disclose both the positive and negative impacts of algorithms on users. (2) Improve algorithm attitudes: regulators can regularly publish algorithmic governance cases and enforcement activities, while enterprises can shape the public's perception, preferences, and evaluation of algorithmic benefits and risks through algorithm application research and algorithm compliance audit reports. (3) Strengthen algorithm skills: all sectors of society can carry out algorithm education widely in formal and social education, cultivating people's abilities to create and edit text, image, and video content with algorithmic tools, to use algorithmic principles to select recommended information and influence the content of algorithm services, to protect privacy, personal information, and digital identity, and to defend algorithmic rights and resolve algorithmic disputes.


3.3 Optimizing Cost Benefit Calculus: Implementing "Algorithm Fairness" through "Process Fairness"

The massive content and accessible data on the internet greatly increase the necessity and accuracy of algorithmic filtering, matching, and pushing, making algorithm recommendation an important mode of information transmission in the digital economy. The survey results also reflect the public's general acceptance of algorithm recommendations. As the cost-benefit-based theory of "algorithmic calculus" observes, losses outweighing gains is the most important form of algorithmic unfairness. In the typical fairness-threatening scenario of differentiated pricing, the survey preliminarily reveals users' weighing of gains and losses: differentiated pricing based on income level (essentially users' "willingness to pay") is offensive, differentiated pricing based on membership status is accepted by many, and no consensus has formed on differentiated pricing based on the frequency of service use or on new versus old users (i.e. "big data price discrimination"). Behind these differences lies the diversity of differentiated pricing types and their positive and negative effects on user rights: the consumer surplus of the party charged the higher price may be transferred to the party charged the lower price, enabling the latter to enjoy services or products that would be unattainable under uniform pricing. From the perspective of social welfare, this helps reduce the deadweight loss of the "Harberger triangle" and implements the Kaldor-Hicks criterion. Attempting to draw the boundary between "reasonable" and "unreasonable" differentiated pricing through typologies is therefore unlikely to produce consensus. On the other hand, the opacity of differentiated pricing rules and the imbalance of information and status are the main reasons users perceive price fraud and coercion and therefore feel strong aversion. In the future, regulation may accordingly shift from emphasizing "fairness in results", under which "algorithms should ensure the equal distribution of valuable things among all parties", to "fairness in process", under which "algorithms should treat all participants equally, and all parties have equal opportunities, conditions, and rights".
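A stylized numerical example (ours, not the paper's) of the welfare argument above: assume two consumers with willingness to pay 10 and 4 for the same service, produced at a marginal cost of 2.

\[
\text{Uniform price } p = 10:\quad W = 10 - 2 = 8, \qquad \text{DWL} = 4 - 2 = 2;
\]
\[
\text{Differentiated prices } p_1 = 10,\ p_2 = 4:\quad W = (10 - 2) + (4 - 2) = 10, \qquad \text{DWL} = 0.
\]

Total welfare rises from 8 to 10 and the deadweight loss of the Harberger triangle disappears, a Kaldor-Hicks improvement even though each consumer's own surplus is zero; this is exactly why fairness judgments based only on outcomes remain contested.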

"Process fairness" for algorithms first requires enterprises to fully state their reasons when pricing differentially. For example, in "new-customer acquisition" campaigns that distinguish new from old users, they can explicitly state "discount on first order, exclusive offer for new users"; in "win-back" campaigns that distinguish the frequency of service use, they can state "Dear user, you have not visited our store for a long time, here is a coupon"; and in marketing campaigns targeting particular groups, they can specify "exclusive student price" or "exclusive price for those over 60". At the same time, "process fairness" should also guarantee users the right to "opt out" of differentiated pricing. The survey shows that when users are dissatisfied with differentiated pricing, most will protect their interests by voting with their feet or by exposure and complaint, imposing strong external constraints on enterprise behavior. On the basis of Article 18 of the E-Commerce Law, which regulates personalized recommendations, Article 24(2) of China's Personal Information Protection Law adds the specific obligation for enterprises to "provide individuals with a convenient way to refuse", effectively avoiding the substantive failings of a pure "opt-in" mechanism. On this basis, if users feel they have suffered pricing discrimination, they can choose to withdraw from the service at any time, which in turn incentivizes them to challenge enterprises' unreasonable algorithmic recommendations.

Broadening the perspective, more accurate, comprehensive, and convenient algorithmic cost-benefit calculation can help users form genuine "calculated trust", placing individuals and enterprises within the community of interests of the digital economy. Based on regulatory incentives rather than regulatory deterrence, it can effectively repair the "user-enterprise" digital trust relationship that differentiated pricing may erode and ultimately achieve a win-win outcome.


3.4 Responding to algorithmic risks: Building algorithmic security through "algorithmic accountability"

Although algorithm security is the primary regulatory goal established by the "Opinions on Comprehensive Governance of Algorithms", from the perspective of resolving the algorithm paradox, algorithm security should serve as the bottom line, applied only when measures such as algorithm for goodness, algorithm transparency, and algorithm fairness prove inadequate. This is because, although algorithm abuse may threaten the order of cyberspace dissemination, market order, and social order, all of these orders have a capacity for self-repair and improvement; algorithm security regulation should therefore remain restrained, lest ubiquitous security requirements harm the self-development and evolution of existing orders. This does not, however, mean that algorithm security is unrelated to the algorithm paradox. As research on the privacy paradox has revealed, privacy protection is a massive, complex, and never-ending project: it is not enough merely to grant individuals the rights to know and to choose and leave them to self-manage; regulators must set boundaries for the collection and use of personal information to protect individuals from infringement. The same holds for the algorithm paradox. Beyond ethics-oriented algorithm for goodness, rights-oriented algorithm transparency, and procedure-oriented algorithm fairness, accountability-oriented algorithm security should also be taken as the regulatory bottom line in response to public concerns. This is confirmed by this article's survey, in which 58.12% of respondents chose "strengthening the legal responsibility of enterprises using algorithms" as a way to eliminate algorithmic risk.

The requirement of "algorithm accountability" is to be able to hold the algorithm service provider accountable when the algorithm application causes infringement or negative consequences. Based on this, on the one hand, for risks arising from the special nature of the scenario and the importance of legal interests, regulators should view algorithms as tools for infringing or illegal behavior, adopt a "result oriented" and "materialist" approach, and hold them accountable after the harmful consequences occur. On the other hand, accountability is not limited to "relief and punishment after the event". For algorithms that use Reinforcement learning, Unsupervised learning, and deep learning, it is difficult for the law to fully grasp its internal logic and decision-making process through the algorithm model, so it is impossible to identify behavior faults and causality. The expansion from "post event" to "pre event" has become the development direction of algorithmic accountability. For example, in 2022, the United States launched a new version of the "Algorithm Accountability Act (Draft)", with "Algorithm Impact Assessment" as the core content, requiring companies to conduct systematic impact analysis of bias, effectiveness, and related factors when making decisions using algorithms.

In this regard, algorithm accountability becomes a comprehensive mechanism for controlling algorithmic violations: by tracking data collection, feature extraction, algorithm design, certification and review mechanisms, and post-violation correction mechanisms, it actively prevents the potential negative impacts of algorithms. Accountability covering the pre-event, in-event, and post-event stages requires the joint participation of the market (code), the community (norms), and the government (law). Among these, code-based self-regulation by enterprises in the market is the least costly and most efficient governance approach; the survey likewise shows that over 60% of respondents endorse "strengthening enterprises' algorithm security assessments", the first choice among the measures offered. Certification bodies, expert institutions, and other social organizations can help establish disputed facts and identify fault and causation in enforcement and judicial processes, and are an important component of algorithm accountability. In addition, the government should actively supervise and hold enterprises civilly, administratively, and criminally liable, creating effective deterrence.


Epilogue

Legal empirical research serves to explore the factual basis of normative argumentation and to measure legal effectiveness. This survey of user perception of algorithm applications reflects the public's high concern about algorithmic risks and provides a solid factual basis for algorithm legislation and enforcement in China. At the same time, it also reveals the "algorithm paradox" between people's algorithm concerns and their algorithmic behavior. As a concept developed from the "privacy paradox", the algorithm paradox has profound theoretical implications. When regulators actively respond to algorithmic risks, they need to treat users' true feelings and wishes with care, follow the inherent laws of the digital economy and algorithmic society, and promote the improvement of China's algorithm governance system around algorithm security, transparency, fairness, and goodness.


Of course, as the first large-scale empirical study of user algorithm perception in China, this survey may not reflect the full picture of the facts; more importantly, it is a snapshot taken before the algorithm regulations came into effect. We will therefore update the questionnaire and conduct a new round of surveys in the near future, aiming to describe the effectiveness of algorithm regulation over time, to discover changes in people's cognition, attitudes, rights, and actions, and thereby to bridge the gap between what is and what ought to be.