SHEN Kui
Abstract: The widespread existence of soft law does not mean that it is complied with and implemented. AI's soft law, the AI ethics code, has proved to suffer an "effectiveness deficit." This owes partly to features of AI ethics codes themselves, such as unenforceability, abstractness and vagueness, and diffusion, overlap, and confusion, and partly to other causes: companies have little incentive to comply, compliance is in some cases paradoxical, social system theories disclose structural predicaments, and a deterministic approach dominates AI development. Nevertheless, AI ethics codes retain unique value owing to their flexibility and agility, multiplicity and adaptivity, cooperativeness and experimentalism, de facto pressures, and cross-jurisdictional applicability. Empirical studies have demonstrated that AI soft law can be implemented indirectly through a set of mechanisms concerning organization, pressures for compliance, incentives for compliance, technological methodology, specifications, and the interaction of soft and hard law. A more general conclusion can be drawn: soft law acquires much of its effectiveness by combining value consensus with economic logic, and internal reasons with external impetus.
Key Words: soft law, artificial intelligence, ethics, implementation mechanism, effectiveness of soft law
1 The Question: How Does Soft Law Produce Actual Effects
The effectiveness or validity of soft law - its normative claim that it "should" be followed and implemented - rests on persuasive rather than compulsory binding force. As long as soft law does not conflict with hard law or its principles and spirit, and roughly conforms to society's cognition and expectation of a better "public good" within a certain range, it possesses a unique effectiveness. Owing to differences in the authority of soft law makers, in the degree of recognition of the "public good" at stake, in the consultation and communication during the formulation process, and so on, the persuasive binding force of soft law varies in strength; what is common is that the "should" of soft law is not backed by compulsory sanctions. In this sense, the effectiveness of soft law must be distinguished from its actual effect: the former exists in the normative sense, the latter in the factual sense.
However, the conventional definition of soft law itself implies that it is effective within a certain range: it is hard to imagine that behavioral rules with no effectiveness and no hard law attributes could be called "soft law" at all. A question that therefore needs to be addressed is how soft law produces, or obtains, universal effectiveness. There must be an interval, however long, between the proposal and publication of soft law and its actual effect. Behavioral rules of a soft law nature do not usually reap effects immediately upon coming into being, unless they simply give the form of rules to common practices that are already widely followed; such exceptions are relatively rare. After all, most soft law is future-oriented, expecting people to follow new behavioral rules, or to change existing ones, for the sake of a better "public good". Although the vitality of soft law comes from its inherent persuasiveness, it is too idealistic to expect a proposed soft law to evolve into a real one on the strength of this inherent attribute alone. Soft law that points to a better "public good" usually requires actors to bear additional compliance or application costs. If no appropriate and effective mechanism reduces or offsets such costs, then the tendency to seek benefit and avoid harm - or the fear that good money will be driven out by bad - will often overwhelm the appeal of soft law's inherent reasons, preventing it from obtaining universal effect. Herein lies the significance of exploring how soft law produces actual effects, over and above the inherent reasons that give soft law its normative force.
Luo Haocai and Song Gongde once pointed out, in their seminal domestic work on soft law, Soft Law is also Law: Public Governance Calls for Soft Law Governance, that the expression "the implementation of law depends on the state's coercive force" is not accurate. For the implementation of law - that is, the transformation of the effectiveness of law into its actual effect - the state's coercive force is not indispensable; the two are not necessarily linked. The implementation of law may occur because the actor: (1) habitually obeys out of conformity; (2) voluntarily obeys out of recognition; (3) complies because of incentives; (4) complies because of social pressure such as public opinion; (5) complies because of organizational pressure; or (6) complies because of the use or threat of the state's coercive force. The ways in which law produces actual effect are thus diverse, and the implementation mechanisms of law mainly comprise four: voluntary obedience, habitual obedience, socially compelled obedience, and state-compelled obedience. The two authors developed these points while reflecting on and revising the definition of "law", ultimately arriving at a new concept of "law" that includes both hard law and soft law, in which the implementation mechanisms of law are summarized as "public coercion" and "self-discipline". Undoubtedly, of the six items listed above, all but state-compelled obedience, which applies only to hard law, can appear in the implementation of soft law.
However, as far as this article is concerned, perhaps only incentives, social pressure, and organizational pressure among the above items deserve attention as ways of making soft law effective. Habitual obedience is obviously not a mechanism by which soft law moves from advocacy to universal compliance, since habit itself presupposes that universal effectiveness already exists. Voluntary obedience, based on recognition of soft law's inherent reasons, is indeed a driving force for its implementation. Yet under hard law, voluntary obedience brings, besides the intangible benefit of value recognition, at least the benefit of avoiding state-enforced sanctions. As noted above, voluntary obedience under soft law brings no such benefit and may even cost the obedient actor more, so it can hardly become a powerful mechanism for soft law to produce actual effects.
Of course, when discussing "the implementation of law", Luo Haocai and Song Gongde did not pay special attention to the implementation of soft law. The incentives, social pressures, and organizational pressures they mentioned were theoretical elaborations on the implementation of all legal norms (hard and soft alike), lacking rich examples from soft law practice. More importantly, because the question of how soft law produces actual effects was never given a prominent, dedicated place among the problems they consciously set out to solve, there is naturally no clear discussion of whether incentives, social pressure, and organizational pressure can cover all or most soft law implementation mechanisms.
From a comparative law perspective, foreign researchers have paid increasingly direct attention to the effectiveness of soft law. In 2019, Miriam Hartlapp, a professor at the Free University of Berlin, published research on the actual effects of EU soft law in EU member states, pointing out that the legitimacy of soft law is not the key to promoting its implementation; what really matters is whether actors can benefit from implementing it, and the possible hardening of soft law proceeds in parallel with its implementation. In 2021, Andreas Zimmermann, a professor at the University of Potsdam in Germany, explored how non-legally binding instruments - taking memoranda of understanding as an example - produce legal effects under international law, attributing this mainly to the interaction between such instruments and legally binding ones, an interaction supplied by many legal mechanisms. In 2020, Arizona State University professor Gary E. Marchant and researcher Carlos Ignacio Gutierrez jointly completed an article on the indirect implementation of AI soft law. They argue that the success of soft law is highly context-dependent, hinging on the cost and feasibility of compliance, the incentives for compliance, and the consequences of refusing or failing to comply. They describe nine mechanisms and processes that help make AI soft law more effective and more credible, hinting that there may be more. Related studies are too numerous to list, but the above examples already show two things: on the one hand, as noted earlier, these authors share the basic premise that the effectiveness of soft law depends largely on the benefits compliance brings to actors, whether increased gains or reduced disadvantages; on the other hand, the mechanisms that let actors obtain benefits and thereby promote the universal effect of soft law go far beyond incentives, social pressure, and organizational pressure.
For the advocates, promoters, and researchers of soft law, however, a typology of soft law implementation mechanisms is perhaps needed, so as to form a relatively fixed yet open and inclusive thinking tool that promotes the conscious construction of supporting mechanisms for implementation. "Relatively fixed" means forming clear classificatory concepts, each abstract enough to accommodate a family of concrete, formalized implementation mechanisms; "open and inclusive" means that further forms of implementation mechanisms - already existing in practice or yet to emerge, not or not yet mentioned in this article - can also be accommodated by these conceptual types. This article explores which types of implementation mechanisms can increase the likelihood that soft law becomes effective.
Given that soft law pervades various fields of public governance, this article, to keep the research focused, takes the implementation of AI soft law as its main object. Artificial intelligence has brought benefits, large and small, to countless researchers, developers, and users. Driven by strong interests, it has developed rapidly, and the position of governments, i.e., public regulators, has been to enable rather than inhibit its development, especially in the initial stage. This position has been accompanied by regulation based on soft law. Even as the risks of artificial intelligence become clearer and hard law norms for classifying and controlling different risks multiply, hard law cannot completely displace soft law's important position in this field. It should be noted in particular that the main soft law form in AI governance is ethics. For reasons of space, this article does not discuss the relationship between science and technology ethics and soft law, although that is an important topic belonging to the ontology of soft law - what soft law is. Professor Gary Marchant of the United States and Professor Effy Vayena of Switzerland treat AI ethical norms as a form of soft law, and this article takes the same approach.
This article proceeds in three steps. The second part, building on existing research, describes the current state of AI soft law governance, pointing out that the flurry of AI ethics norms cannot mask their huge "effectiveness deficit". The third part analyzes the reasons for this deficit and explains why, even so, AI governance still needs soft law. The fourth part reveals and classifies the mechanisms that help soft law be implemented and take effect, in order to establish a theoretical tool of guiding significance. The conclusion summarizes the main points and emphasizes that the implementation of, and universal compliance with, soft law require combining value consensus with economic logic, and internal reasons with external impetus.
2 AI Soft Law and Its “Effectiveness Deficit”
In their article "The Global Landscape of AI Ethics Guidelines", Dr. Anna Jobin, Dr. Marcello Ienca, and Professor Effy Vayena of Switzerland pointed out that over the preceding five years, private companies, research institutions, and public sector organizations had issued a large number of ethical principles and guidelines for artificial intelligence to address the concerns it raises. These guidelines are not legally binding but persuasive in nature, and can be called non-legislative policy documents or soft law. To study whether different groups have reached consensus on what ethical artificial intelligence should be and which ethical principles will determine its future development - and, where they diverge, where the differences lie and whether they can be reconciled - they collected 84 documents containing AI ethical norms from around the world.
Their study of these documents shows the following. First, the numbers of ethical norms issued by the public sector (government and intergovernmental organizations) and the private sector (companies and their alliances) are roughly equal, indicating that both attach great importance to the matter. Second, the underrepresentation of Africa, South and Central America, Central Asia, and other regions signals an imbalance of power in the international discourse on AI ethics. Third, more economically developed regions are shaping the discussion, which may raise concerns about local knowledge, cultural pluralism, and global fairness. Fourth, the main AI ethical principles are: (1) transparency; (2) justice, fairness, and equality; (3) non-maleficence; (4) responsibility and accountability; (5) privacy; (6) benefiting humanity; (7) freedom and autonomy; (8) trust; (9) sustainable development; (10) dignity; and (11) social solidarity. Fifth, no principle is common to all 84 documents, although transparency, justice and fairness, non-maleficence, responsibility, and privacy appear in more than half of the guidelines. Sixth, all eleven principles diverge in substantive content, the main determinants being: (1) how the principles are interpreted; (2) why they are considered important; (3) what issues, domains, and actors they concern; and (4) how they should be implemented. On these findings, the authors of that study conclude: at the policy level, more cooperation among all stakeholders is needed to achieve consistency and convergence in the content and implementation of ethical principles; for the world, putting principles into practice and coordinating AI ethical norms (soft law) with legislation (hard law) are the important tasks ahead; and whether these non-legislative norms will have an impact at the policy level, or on individual practitioners and decision makers, remains to be seen.
The implementation and effectiveness questions raised by Professor Effy Vayena and her co-authors have been explored by researchers both before and after their study appeared, with an answer that is basically negative. Algorithm Watch is a non-governmental, non-profit organization based in Berlin, Germany and Zurich, Switzerland, whose mission is a world in which algorithms and artificial intelligence strengthen rather than weaken justice, human rights, democracy, and sustainable development. In 2019 the organization released its "AI Ethics Guidelines Global Inventory", compiling frameworks and guidelines worldwide that aim to set principles for developing and deploying automated decision-making systems ethically. After an update on April 28, 2020, the inventory contained more than 160 guidelines, including several from China: the "Beijing AI Principles" (May 25, 2019), issued by the Beijing Academy of Artificial Intelligence together with Peking University, Tsinghua University, the Institute of Automation and the Institute of Computing Technology of the Chinese Academy of Sciences, the New Generation Artificial Intelligence Industry Technology Innovation Strategic Alliance, and other universities, research institutes, and industry alliances; the "Artificial Intelligence Industry Self-Discipline Convention (Draft for Comments)" (May 31, 2019), issued by the China Artificial Intelligence Industry Alliance; and the "New Generation Artificial Intelligence Governance Principles - Developing Responsible Artificial Intelligence" (June 17, 2019), issued by the National New Generation Artificial Intelligence Governance Professional Committee.
Obviously, the inventory compiled by Algorithm Watch cannot include every AI ethics norm in the world issued as guidelines, principles, norms, initiatives, self-regulatory conventions, and the like. First, the number of such soft laws is hard to count, and a collection made at any point in time may be incomplete; second, their creation is subject to no strict restrictions on makers or procedures and is quick and easy, so new soft laws appear soon after any collection date. Taking China as an example: on July 8, 2017, the State Council issued the "New Generation Artificial Intelligence Development Plan", which repeatedly mentioned the significance, focus, and timeline of building AI ethics norms, though it did not itself propose specific norms. The "Artificial Intelligence Standardization White Paper (2018 Edition)", issued by the China Electronics Standardization Institute on January 18, 2018, made clear, if only roughly and briefly, that AI development should follow ethical requirements such as the principle of human interests, the principle of transparency, and the principle of consistency between power and responsibility. All this predates Algorithm Watch's collection or update. After that date, China's National New Generation Artificial Intelligence Governance Professional Committee released the "New Generation Artificial Intelligence Ethics Code" on September 25, 2021, which systematically set out six basic ethical norms - "promoting human welfare", "promoting fairness and justice", "protecting privacy and security", "ensuring controllability and trustworthiness", "strengthening responsibility", and "improving ethical literacy" - and provided corresponding norms for management, research and development, supply, and use.
Yet incompleteness is not the crux of the problem - Algorithm Watch itself noted, when releasing its preliminary conclusions in 2019, that more guidelines would follow - and the organization's observations are more important and more striking. In 2019, Algorithm Watch published the article "AI Ethics Guidelines: Binding Commitment or Window Dressing?", pointing out that most of the 83 guidelines then collected were industry-led, voluntary self-regulation being a popular means of averting government regulation. Companies such as SAP, Sage, Facebook, and Google have both internal principles and published general guidelines; some guidelines are issued by companies as members of industry alliances, such as the Partnership on AI, and some by industry associations. Most importantly, few guidelines come with governance or oversight mechanisms to ensure that these voluntary commitments are followed and implemented. By 2020, the Algorithm Watch database contained more than 160 guidelines, all voluntary or advisory, of which only 10 had implementation mechanisms. Even the ethical guidelines developed by the Institute of Electrical and Electronics Engineers (IEEE), the world's largest professional association of engineers, are largely ineffective: large technology companies such as Facebook, Google, and Twitter do not implement them, even though many of their engineers and developers are IEEE members.
Both Algorithm Watch reports are thus essentially negative about the effectiveness of AI ethics guidelines, and this is not an isolated opinion. Earlier, researchers at North Carolina State University recruited 63 software engineering students and 105 professional software developers and divided them into two groups: one was explicitly instructed to consult the ethical code of the Association for Computing Machinery (ACM); the control group never saw the ACM code. Subjects answered eleven multiple-choice questions, each presenting a brief scenario requiring an ethical decision. The study concluded that, for students and professional developers alike, there was no statistically significant difference between the answers of those who had seen the code and those who had not - suggesting that ethical codes have no substantial impact on software development. AI ethics codes are drawn up mainly by technical experts, supplemented by legal experts, who hope to handle the ethical problems of AI and machine learning through technical and design expertise, placing design at the center of ethical review. The test results above for software engineering students and professionals therefore corroborate the "effectiveness deficit" of ethics codes: behind the large output of AI ethics codes stand considerable inputs and expenditures, but the return, that is, the effectiveness, falls far short of the cost - hence this article's term "effectiveness deficit".
Is the AI ethics code, then, really of "almost zero effectiveness", as that test suggests? This article thinks not. First, AI ethics codes are not simply shelved. Those issued by technology giants exert at least some restraint on the issuers themselves. Since releasing its "AI Principles" in 2018, Google has published an annual update report explaining to the public its efforts, progress, and lessons learned in practicing the principles. The 2023 report states: "This is the fifth edition of our annual progress report on the AI Principles, and through the annual report, we have consistently been transparent about how we put the principles into practice. We first published the AI Principles in 2018 to share the company's technical ethics charter and keep us accountable for how we responsibly research and develop artificial intelligence. Generative AI is no exception. In this report, we share in detail the principled approach used in the research and development of new generative AI models, including the Gemini family of models. Principles can only be effective when they are put into practice. This is why we publish this annual report - including the difficult lessons learned - so that others in the AI ecosystem can learn from our experience." The authenticity of Google's annual reports, and the extent to which the efforts they describe actually implement its principles, still lack neutral, objective, and complete evaluation. And when Google announced that it would not renew its contract with the US Department of Defense, ceasing to provide AI assistance in analyzing overseas military drone surveillance footage, the decision was arguably made because employees protested the ethical controversy and concerns the project caused, not as a voluntary fulfillment of its AI ethics standards. Nevertheless, the annual reports and their disclosure at least mean that the company is willing to report publicly on its progress in implementing AI ethical standards, and to expose itself to broad supervision and possible criticism at any time.
Second, although the application of artificial intelligence systems performs poorly in ethical compliance overall, there is clear progress in applying some principles, such as privacy, fairness, and explainability. For example, many privacy-preserving techniques for dataset use and learning algorithms have been developed worldwide, using methods such as cryptography, differential privacy, or stochastic privacy to deliberately "darken" what artificial intelligence systems can "see". Paradoxically, however, the great progress artificial intelligence has made in recent years is owed precisely to the availability of vast amounts of data, including personal data - data collected by privacy-invading social media platforms, smartphone applications, and IoT devices with countless sensors.
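To make one of these techniques concrete, the sketch below illustrates differential privacy in its simplest textbook form: answering a counting query over personal records with calibrated Laplace noise, so that the presence or absence of any single individual barely changes the output. This is a minimal illustration written for this article, not code drawn from any guideline cited above; the function names and example data are hypothetical.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) by inverse transform sampling."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Epsilon-differentially private count.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

if __name__ == "__main__":
    ages = [23, 37, 45, 52, 29, 61, 44, 35]  # hypothetical personal records
    print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; the ethical trade-off between data utility and individual privacy thus becomes an explicit, tunable parameter rather than a matter of rhetoric.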
Furthermore, AI ethical norms are also being realized at the level of "micro-ethics". Although at the macro level the implementation of AI ethical norms couched in abstract and vague language is unsatisfactory, the wide attention to AI ethics is driving a transition from ethics to "micro-ethics" (such as technology ethics, machine ethics, computer ethics, information ethics, and data ethics), with good effect. For example, Timnit Gebru's research team proposed a standardized datasheet that lists the properties of a training dataset, so that machine learning practitioners can check how well a specific dataset suits their purposes: why the dataset was created, what it consists of, how the data were collected and preprocessed, and so on. Practitioners can thus make more informed choices of training data, making machine learning fairer, more transparent, and less prone to algorithmic discrimination. This "micro-ethics" result has been taken up by Microsoft, Google, and IBM, which have begun piloting datasheets for datasets internally; the Data Nutrition Project has adopted some of its results, and the Partnership on AI is building similar datasheets.
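A minimal sketch of what such a datasheet might look like in code follows. The field names are hypothetical simplifications that loosely follow the question categories of Gebru et al.'s "Datasheets for Datasets" (motivation, composition, collection, preprocessing, uses); they are illustrative, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Datasheet:
    """Illustrative skeleton of a dataset datasheet (hypothetical fields)."""
    name: str
    motivation: str           # Why was the dataset created, and by whom?
    composition: str          # What do instances represent? Any sensitive data?
    collection_process: str   # How were the data acquired and sampled?
    preprocessing: str        # Cleaning, labeling, filtering applied
    recommended_uses: str     # Tasks the dataset is (or is not) suited for
    known_biases: List[str] = field(default_factory=list)

    def summary(self) -> str:
        """What a practitioner reads before choosing the dataset for a task."""
        biases = "; ".join(self.known_biases) or "none documented"
        return (f"{self.name}: recommended for {self.recommended_uses}; "
                f"known biases: {biases}")

faces = Datasheet(
    name="example-faces-v1",
    motivation="Benchmark face recognition across demographic groups",
    composition="Portrait photos with self-reported age and skin-type labels",
    collection_process="Volunteer submissions under informed consent",
    preprocessing="Faces cropped and aligned; duplicates removed",
    recommended_uses="Fairness auditing of face recognition models",
    known_biases=["Underrepresents people over 65"],
)
print(faces.summary())
```

The point is not this particular schema but that dataset properties become checkable artifacts, turning an abstract fairness principle into a concrete pre-training checklist.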
Finally, as a matter of principle, the "implementation effectiveness" of soft law usually takes time to show. The salient feature of soft law is persuasion, not coercion, and persuasion inevitably carries a time cost. Still, the movement from a handful of AI ethics codes in 2016 to the many governments, non-governmental organizations, large enterprises, and other entities worldwide that have since issued or updated such codes shows that a moral consensus is forming: the development and use of AI must carry ethical responsibility. The philosopher Karl Popper held that the scientific community had already reached such a moral consensus on nuclear, biological, and chemical weapons: recognizing a set of specific threats, a group of specific people, a set of specific tools, and a set of specific ideas that must be prepared against the threats. From this perspective, AI ethics codes have at least achieved a "promotional effectiveness" - perhaps like corporate social responsibility, which took decades to partially shed the reputation of "greenwashing" or "whitewashing" and to set global standards that many companies must follow. This last point is not meant to smuggle "promotional effectiveness" into the "implementation effectiveness" on which this article focuses, but only to add a time-process dimension to the observation and study of implementation effectiveness.
3 The Reasons for the “Effectiveness Deficit” and Why Soft Law Is Still Needed
3.1 Reasons for the “effectiveness deficit” of AI ethics norms
Although its effectiveness cannot simply be written off as zero, the AI ethics code, less than ten years in the making, has not achieved the goal of eliminating or greatly alleviating people's concerns and worries about AI ethics. The main reasons are as follows.
First, the non-mandatory nature of AI ethics codes. The 2017 report of the AI Now Institute pointed out that ethical codes constitute a form of soft governance - a substitute for rigid traditional government regulation and legal oversight - that has been actively developed in the AI field but has practical limitations, the key one being the assumption that enterprises and industries will voluntarily adopt and comply. Its 2018 report continued: "Although we have seen a wave of enthusiasm for formulating such codes, ... we have not seen strong supervision and accountability to ensure the fulfillment of these ethical commitments." This inherent flaw, soft law's Achilles' heel, has become the widely recognized fundamental reason for the ineffectiveness of AI ethics codes.
Second, the abstractness and vagueness of AI ethics codes. Ethical standards for artificial intelligence are addressed not to artificial intelligence but to the humans who research, develop, and apply it; their goal is to have researchers, developers, and users follow certain standards so as to minimize ethical risks. The more specific and clear a standard, the easier it is to comply with; otherwise implementation becomes difficult or contested. Yet today's AI ethical standards are mostly abstract and vague. Most guidelines rarely, if ever, use terms more specific than "artificial intelligence" - itself a collective label for a wide range of technologies, a huge abstraction. No ethical guideline delves notably into technical details, which reveals a deep gap between the concrete contexts of research, development, and application and general ethical thinking. Abstraction and vagueness may be deemed inevitable and even necessary, given that artificial intelligence is extremely widely used, develops rapidly, and has an uncertain trajectory; but the successful "micro-ethics" examples above show that relative specificity and refinement are possible.
Third, the diffusion, confusion, and duplication of AI ethical standards. As with other soft law, the makers of AI ethics norms include governments, enterprises, enterprise alliances, industry groups, non-governmental public welfare organizations, research institutions, and so on, yielding ethical norms in many forms. The research of Professor Effy Vayena and her colleagues, cited above, shows that documents may use the same terms for AI ethical principles while diverging substantially in content. Even transparency, the most common principle, differs significantly across documents in interpretation (communication, disclosure), rationale, scope of application, and means of realization. "There has been a confusing surge in different AI soft law projects and proposals, causing confusion and duplication in AI governance. It is difficult for actors in the field of AI to evaluate and comply with all these different soft law requirements."
Fourth, insufficient motivation for voluntary compliance with AI ethics norms. The non-mandatory nature of AI ethics norms means they depend on researchers, developers, and users complying voluntarily. AI ethics norms project humanity's long-standing ethical concerns onto the AI field, and the widespread ethical anxiety stirred by emerging AI technologies shows that an ethical consensus broadly exists. Nevertheless, the economic benefits AI brings to entities in many fields - whether wealth growth or cost reduction - are so great that ethical concerns grounded in values or principles can hardly prevail over economic logic. In business, speed is everything, and skipping ethical deliberation is the path of least resistance. In this sense, the ethical "good money" may end up as competitive "bad money", eliminated by the market.
Fifth, the compliance paradox of AI ethics. Compliance with AI ethics often has to be realized in technology, especially at the design stage; hence concepts such as "ethically aligned AI systems" or "ethically aligned design". Yet, as noted above, in some cases the large amounts of data required for ethical design (such as privacy-protecting technology) are collected in ways that themselves arguably violate ethical principles (such as privacy). Whether this paradox is widespread lacks sufficient empirical study, but it is quite likely that AI first violates ethical principles in order to develop fully and only then considers how to be ethical.
Sixth, the social-system dilemma of AI ethics' influence. Besides revealing the economic logic behind the neglect of AI ethics in practice, Thilo Hagendorff, a professor at the University of Stuttgart in Germany, draws on three famous sociologists for a macro-sociological analysis. Ulrich Beck, the German sociologist and a pioneer of risk society theory, offered the vivid metaphor that the ethics of today's society "plays the role of a bicycle brake on an intercontinental aircraft" - particularly apt in the context of artificial intelligence. On the system theory of Niklas Luhmann, another German sociologist, modern society consists of many distinct social systems, each with its own operating code and communication medium; structural coupling lets one system's decisions affect others, but the influence is limited and hardly changes each system's overall autonomy. The French sociologist Pierre Bourdieu likewise held that such systems have their own codes, target values, and economic or symbolic capital, through which they are constructed and on which their decisions are based. This autonomy is conspicuous in AI industry, commerce, and science; ethical intervention in these systems can work only to a very limited extent.
Seventh, the fatalism that AI development overwhelms constraint. The deepest reason for the "effectiveness deficit" of AI ethics norms is that human society's basic stance toward AI is determinist or fatalist. The vast majority of AI ethics guidelines describe AI as a force driving historic change - inevitable, far-reaching, and hugely beneficial to mankind - to which human society can only respond, adapt, and answer for the risks and consequences. The 2018 Montreal Declaration, for example, states: "Artificial intelligence constitutes a major advance in science and technology that can improve living conditions and health, promote justice, create wealth, enhance public safety, and reduce the impact of human activities on the environment and climate, thereby generating considerable social benefits." The Global AI Governance Initiative issued by the Cyberspace Administration of China in October 2023 takes a similar position: AI is a new frontier of human development; global AI technology is developing rapidly, profoundly affecting economic and social development and the progress of human civilization, and bringing enormous opportunities to the world. In this determinist/fatalist context, not only do technology giants such as Google, Facebook, Baidu, and Alibaba race to launch new AI applications, but countries have also joined the AI race, viewing AI as the engine for solving problems across human society. Given this, ethical norms that would constrain AI development to any considerable extent naturally resemble bicycle brakes on an airplane.
3.2 Why AI still needs ethical norms as soft law
All of the above directly or indirectly hinder the implementation of, and compliance with, AI ethics norms, and some obstacles seem fundamental and irreversible. Does this mean AI governance should not take the soft law path? No - the characteristics of AI development itself dictate that preventing its risks and reducing its harms cannot rely on hard law alone. Five main reasons why AI governance still needs ethics norms, as soft law, follow; each involves shortcomings of hard law or rigid regulation and corresponding advantages of soft law or flexible governance.
First, the flexibility and speed of soft law. Almost everyone researching AI acknowledges that it is developing at an astonishing speed and penetrating every aspect of human life just as fast, subjecting human society to rapid transformation whose future is hard to predict. Harms have already begun to appear; risks lurk quietly. Driven by the economic logic prevalent in both the public and private sectors, the trend seems determined, even fated. How to control and prevent harms and risks has thus become a "pacing problem" for the legal system. As early as 1986, the US Office of Technology Assessment observed: "Technological change, which used to be a relatively slow and dull process, is now faster than the legal structure that governs the system, which puts pressure on Congress to adjust the law to adapt to technological change." The pacing problem has two aspects: many existing legal frameworks rest on a static rather than dynamic view of society and technology; and the capacity of legal institutions (legislatures, regulators, and courts) to adjust to technological change is slowing. Its existence sharpens concern about AI's harms and risks. Compared with the bureaucracy, formality, and cumbersomeness of formal legislative procedure, the making and updating of soft law is far more flexible and quick. As noted above, the makers of AI ethics norms are diverse and face no strict procedural constraints, so people's ethical concerns can be translated in good time into principles guiding AI research, development, and application. Abstract, vague, and non-binding as these principles are, the de facto binding force of publicly announced ethical norms is not zero.
Second, the diversity and adaptability of soft law. "Artificial intelligence" is only an abstract term covering a vast and ever-growing range of technologies, each of which may raise more specific ethical concerns and require specific technical solutions. For example, the rollout of an algorithm in Arkansas's healthcare system harmed patients with diabetes or cerebral palsy, sharply cutting the care they could receive. YouTube's recommendation algorithm, developed by Google, relies on a feedback loop that optimizes users' watch time; in predicting what people like to watch it also determines what they watch, fueling sensationalist false videos and conspiracy theories. Google's search results once exhibited a bias whereby searching a historically Black name suggested a criminal record in the results, while results for historically white names were comparatively neutral. And when AI/machine learning face recognition was accused of under-recognizing people of color (especially Black people), Microsoft publicized its "inclusiveness" efforts to improve recognition across skin tones - yet some commentators argued that such technical improvement would harm the Black community, which has historically been the target of surveillance technology. Such examples of AI ethical concern suffice to show that comprehensive, unified, hard-law-based regulation would likely prove unable to adapt to diverse technologies and diverse ethical demands. Regulation can even be anti-market and hard on small businesses, erecting obstacles only large enterprises can clear. Soft law, by contrast, is not made mainly by government: enterprises, industry organizations, enterprise alliances, non-governmental organizations, and others can all draw up better-adapted guidelines for more specific technical-ethical issues.
Third, the cooperative experimentalism of soft law. Although soft law is indeed diffuse, confusing, and duplicative, the very variety of soft law schemes leaves room for selective experimentation in AI research, development, and use. Stakeholders - including but not limited to governments and enterprises - sometimes form cooperative rather than antagonistic relationships, unlike the old regulatory opposition between government and business or the competitive opposition among firms, and these relationships carry elements of mutual learning and mutual benefit. For example, as mentioned above, when Google released the 2023 annual report on its AI Principles, it stated its intention to share the lessons of applying the principles in developing new models. Among the institutions doing most to advance AI ethical norms is the IEEE, the worldwide association of electrical and electronics engineers, whose Global Initiative on Ethics of Autonomous and Intelligent Systems addresses ethical issues raised by the development and spread of autonomous and intelligent systems. It identifies some 120 key issues and proposes solutions for enterprises to choose from. How can the research, development, and use of artificial intelligence - a specific technology in a specific scenario - better comply with ethical standards? In other words, which specific, detailed ethical standards suit a specific AI technology in a specific scenario? This question has no fixed answer, nor can a single professional team devise the best solution alone; it requires cooperation and exploration among technical experts, legal experts, and others, and continuous experimentation - something hard law and hard regulation cannot deliver.
Fourth, the de facto pressure of soft law. Although soft law lacks legal binding force, if its content commands broad substantive consensus and is highly persuasive, then individuals and organizations that choose not to comply must bear the de facto pressure of that recognition. When this pressure outweighs the benefits non-compliance might bring, it turns into de facto binding force. Hence: "research on ethical concerns shows that multiple frameworks, concepts, definitions and their combinations create a complex range of options for organizations to choose from. When the importance of the problem and the support that the organization can get are still uncertain, numerous guidelines make organizations have to withstand criticism of their work processes. ... Choosing a work process ethics guide provides a bottom line for internal and external stakeholders of the organization to evaluate the organization's application products."
Fifth, the transnational applicability of soft law. The research, development, and use of artificial intelligence are global and transnational - especially use on or through the Internet - and so are the ethical concerns it raises. That a specific ethical risk or scandal surfaces at one platform, company, or application does not confine its impact to the country where that platform or company is registered, nor does it mean the same risk or scandal will not appear in other countries, platforms, companies, or applications. For example, barely two months after Microsoft-backed OpenAI launched ChatGPT, the risks of plagiarism, fraud, and misinformation posed by such applications had drawn attention, and EU Internal Market Commissioner Thierry Breton spoke to Reuters of the urgency of establishing global standards. Traditional hard law and hard regulation are confined mainly to the territorial jurisdiction of sovereign states or treaty-based regional organizations; their binding force derives from authorization and recognition by the basic norms of sovereign states or the founding treaties of regional organizations. To address the ethical risks of artificial intelligence on a global scale, therefore, the general promotion of soft law/ethical norms across national and regional boundaries should be an available option.
Of course, in the ecology of the Internet economy and the global economy, large technology companies seeking markets beyond their home countries will certainly heed and comply with the hard law of the jurisdictions where those markets lie. Hard laws made by transnational jurisdictions such as the EU - the General Data Protection Regulation (GDPR) and, most recently, the Artificial Intelligence Act - thus effectively set standards for the world, producing the so-called "Brussels effect". But this effect is indirect in two senses: it can only influence, not ordinarily be copied outright into, the legislation of other sovereign states such as China or the United States; and it binds only technology companies intending to enter the EU market, not the AI research, development, and use of smaller companies with no international ambitions. One may expect broader global consensus on AI ethical norms, operating across the boundaries of the hard law of sovereign states and regional organizations such as the EU, even though they cannot yet be as effective as hoped.
4 Implementation Mechanism of AI Soft Law
On the one hand, AI ethics norms have their own reasons for arising and existing, possess unique value, and have begun to show the "promotional effectiveness" of building consensus and general recognition; on the other hand, AI research, development, and use still seem far from effectively governed by these soft-law ethical norms, and practitioners have yet to weave them tightly into program design, so new AI products and applications repeatedly trigger widespread concern over the ethical risks they bring. How, then, can AI ethics norms be implemented, converted from de facto pressure into de facto constraint, and work with corresponding hard law to meet the challenge of AI's ethical risks? And what general insights can be drawn about how soft law is effectively implemented? Since soft law in principle lacks coercive power and cannot be enforced directly, this article examines the types of mechanisms needed to promote its implementation indirectly.
4.1 Organizational Mechanism for Soft Law Promotion
The implementation of soft law is a gradual process that requires constant self-renewal, consensus-building, and driving force - all the more so for AI soft law, which shoulders the task of preventing and governing uncertain future risks. It is hard to imagine this process without strong organizations continuously and firmly committed to promoting soft law. By type, such organizations may be governmental or corporate, or industry organizations, enterprise alliances, third-party institutions, research teams, and the like; large technology giants such as Microsoft and Google also maintain dedicated AI ethics departments or teams. By function, such organizations can continuously formulate and update AI ethics standards, advocate that AI developers and users worldwide join in observing them, observe and supervise their implementation, or study how to combine them with the design and application of specific technologies.
Government bodies may be torn between developing the AI industry and upholding ethical standards, and so slacken in urging implementation. Companies, industry organizations, and corporate alliances may focus on window dressing and reputation, making them less credible on AI ethics. Even where a company establishes a dedicated AI ethics department or team to fulfill its ethical commitments, that unit's independence may not be guaranteed. In 2020, for example, Google fired Timnit Gebru after she wrote a paper critical of large language models (which would become wildly popular two years later); the resulting anger prompted several senior figures in the AI ethics team to leave and weakened Google's credibility on responsible AI.
Comparatively, higher credibility and driving force belong to organizations that closely observe AI risks, continuously publish follow-up research, and take as their mission supervising and promoting ethically compliant AI, and to research teams (inside companies or not) committed to integrating ethical standards into AI development and use. As one comment put it: "There is no shortage of reports on the ethics of artificial intelligence, but most of them are insignificant and full of clichés such as 'public-private cooperation' and 'people-oriented'. They do not acknowledge how intractable the social dilemmas caused by artificial intelligence are, nor how difficult it is to solve these dilemmas. The new report of the AI Now Institute is not like this. It ruthlessly examines the technology industry's race to reshape society along the direction of artificial intelligence without any reliable and fair results." The "Datasheets for Datasets" work of Timnit Gebru's team went through eight versions between its first release on March 23, 2018 and December 1, 2021, and has been cited 2,263 times. Of course, the existence of reliable, powerful soft-law-promoting organizations usually presupposes institutional space - public or intra-corporate - for their survival and development.
4.2 Pressure Mechanisms for Soft Law Compliance
Soft law exerts de facto pressure because it rests on broad consensus and persuasiveness, even though it leaves actors free to comply voluntarily. As soft law wins recognition from more and more members of a community, compliance earns relatively high praise within it; conversely, violating soft law, though it triggers no hard sanctions, subjects the violator to great pressure, even severe reputational damage and the economic losses that may accompany it. What mechanisms, then, can make this pressure strong enough? At least three matter:
First, the public opinion mechanism. For companies that live or die in the market, public opinion about them and their products is crucial, and consumers usually prefer products with favorable public evaluations. In an open public opinion environment, the news media can therefore treat whether technology companies and their AI products comply with ethical standards - and even whether other companies use AI applications that do - as an important element of evaluation, generating pressure strong enough to push companies toward responsible AI development and use. Beyond an open public sphere, however, two further conditions are needed for this pressure to take effect: consumers must care whether companies and products comply with AI ethical standards, and the competitive market must let consumers choose compliant companies and products.
Second, the confrontation mechanism. Public criticism of companies for ignoring AI ethical standards is itself a form of confrontation; what is meant here is that professionals or stakeholders, inside or outside a company, take action against AI ethical risks. Besides the example above of Google halting its military AI cooperation with the US Department of Defense under employee protest, in 2019 Google disbanded its AI ethics board (officially the Advanced Technology External Advisory Council) barely a week after establishing it, after thousands of employees protested that outside members or their organizations had made unfair comments about transgender people, were skeptical of climate change, or were linked to military uses of AI. In 2018, when then-US President Trump's policy of separating illegal immigrant children from their families was under fire, Microsoft employees protested the company's facial recognition cooperation with US Immigration and Customs Enforcement. And from May 2 to September 27, 2023, the Writers Guild of America, representing 11,500 screenwriters, staged a 148-day strike in a labor dispute with the Alliance of Motion Picture and Television Producers; one demand was that AI such as ChatGPT be used only as a tool to aid research or develop script ideas, not to replace screenwriters. The strike succeeded, and the resulting agreement is considered an important precedent for collective bargaining over AI use. Such protests by professionals and stakeholders rest on their understanding of and commitment to AI ethical norms, or on the threat AI development poses to their own interests. Their claims are not necessarily correct, but they are indeed a force and a mechanism pushing companies to comply with soft law: "More and more meaningful actions for the responsible development of artificial intelligence come from workers, community advocates and organizers." This force, of course, also depends on the broader institutional space and culture governing relations between enterprises and employees and between enterprises and the outside world.
The third is the supervision mechanism. Broadly speaking, public opinion and confrontation are themselves forms of supervision, but supervision of soft law compliance takes other, more varied forms as well. As early as 2015, Professor Gary Marchant and Wendell Wallach proposed establishing an organization called the "Governance Coordinating Committee," intended not to duplicate or replace the work of the many existing organizations in AI governance, but to play a coordinating role like the conductor of a symphony orchestra. The organization has never been established, but many of the functions they envisioned for it concern supervision: monitoring and analysis (identifying gaps, overlaps, and inconsistencies in the implementation of AI governance programs), early warning (flagging emerging problems), evaluation (scoring governance programs on how well they achieve their goals), and convening (bringing stakeholders together to work out solutions to specific problems). In other words, combined with the organizational mechanism described above, AI ethics standards can be better implemented if a relatively independent body, whether an ethics review committee established inside the enterprise or a more neutral social organization outside it, takes on the supervisory functions of monitoring, analysis, early warning, evaluation, and joint consultation.
4.3 Incentive Mechanism for Soft Law Compliance
If the pressure mechanisms of soft law compliance are "minus points" that can cost AI developers and users reputation and the economic losses that follow from it, the incentive mechanisms of soft law compliance are the corresponding "plus points" that can earn them a better reputation and greater economic benefits. Incentive mechanisms appear to take even more varied forms than pressure mechanisms.
The first is the certification mechanism. A neutral third-party certification body can offer certification services, certifying enterprises or other entities that follow a given set of ethical standards in developing and using AI and issuing them certificates.
The second is the evaluation mechanism. A neutral third-party organization, such as a university research institute or a non-governmental organization, can assess whether AI developers have embedded ethical standards into their research and development, whether AI users deploy AI that complies with those standards, and how fully both groups comply, and can single out outstanding performers.
The third is the purchasing mechanism. Developing AI applications requires considerable investment, and developing ones that comply with ethical standards may require even more. Certification and evaluation bring compliant enterprises and other entities a good reputation, but reputation does not by itself translate into economic benefit. By contrast, purchasing and using AI products that comply with ethical standards, especially those already certified or highly rated, is the most direct way for compliant parties to realize actual gains. If purchasers, especially government purchasers, make compliance with ethical standards a precondition of purchase, this will create a market orientation favorable to the implementation of AI soft law.
Fourth, the cooperation mechanism. When AI stakeholders (researchers, developers, and users) form alliances or partnerships to advocate and promote AI ethics standards, supporting and assisting one another, this too helps build public trust and helps AI soft law be implemented faithfully and reliably.
Fifth, the funding and publication mechanism. Institutions that invest in or fund AI research, development, or use, and professional journals that provide platforms for publishing AI research results, can make compliance with AI ethics standards a condition or a priority, thereby encouraging developers and users to comply with AI soft law.
Sixth, the regulatory relaxation mechanism. Government departments responsible for regulating AI development can appropriately relax supervision of enterprises or other entities that maintain a complete system and supporting institutions for managing the development or use of AI, that are committed to complying with AI soft law, and that actually develop or use AI products meeting ethical standards. The benefit of lighter government supervision is considered one of the important incentives behind the success of AI soft law.
4.4 Technical Methodology Mechanism of Soft Law
AI soft law is closely bound up with science and technology, and its implementation is therefore widely believed to rest with AI experts. The Partnership on AI, itself an alliance, distinguishes "the public" from "stakeholders": the former are those who need to be educated and surveyed, while the latter are the scientists, engineers, and entrepreneurs who carry out that education and surveying. It further divides stakeholders into "experts," the leaders of the scientific community who create or respond to AI, and "other stakeholders," the broad range of product users, large companies that purchase AI solutions, and large companies whose industries are being transformed by AI. "Experts make AI happen, and other stakeholders make AI happen to them." Precisely for this reason, the most important step in implementing AI soft law is for professionals to practice "ethical design" and to develop "ethical AI systems" during technology development. How professionals embed ethical values into the development of AI and automated systems requires the support of a technical methodology.
In addition to the "Datasheets for Datasets" method of Timnit Gebru's team, there is the method called ECCOLA, led by Ville Vakkuri, a postdoctoral researcher at the University of Vaasa in Finland. ECCOLA is a modular, sprint-by-sprint process designed to promote ethical consideration in the development of AI and automated systems and to be combined with other existing methods. It has three goals: (1) to raise awareness of AI ethics and its importance; (2) to offer a module suitable for a wide range of systems engineering contexts; and (3) to fit agile development, making ethics an integral part of it. ECCOLA has been iteratively developed and refined through years of practice. Examples of this kind abound.
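To make the sprint-by-sprint idea concrete, the sketch below models, in Python, how a development team might track ECCOLA-style ethics "cards" inside an agile sprint. It is a minimal illustration under assumed simplifications: the card themes, prompts, and class names are hypothetical and do not reproduce the actual ECCOLA card deck or tooling.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: ECCOLA presents ethical themes as "cards" that teams
# take up sprint by sprint. The themes and prompts below are illustrative
# placeholders, not the actual ECCOLA deck.

@dataclass
class EthicsCard:
    theme: str            # e.g., transparency, accountability, privacy
    prompt: str           # question the team must answer during the sprint
    addressed: bool = False
    notes: str = ""

@dataclass
class Sprint:
    number: int
    cards: list = field(default_factory=list)

    def review(self):
        """Return the cards the team has not yet addressed in this sprint."""
        return [c for c in self.cards if not c.addressed]

# The team selects the cards relevant to the features planned for this sprint.
sprint_3 = Sprint(number=3, cards=[
    EthicsCard("transparency", "Can the system's decision be explained to an end user?"),
    EthicsCard("privacy", "Is personal data in the training set minimized and documented?"),
])

# During the sprint, work on a card is recorded just like work on a feature.
sprint_3.cards[0].addressed = True
sprint_3.cards[0].notes = "Added a plain-language explanation screen to the UI."

# At sprint review, unresolved ethics cards surface alongside unfinished tasks.
for card in sprint_3.review():
    print(f"Sprint {sprint_3.number}: unresolved ethics card -> {card.theme}: {card.prompt}")
```

The point of such tooling, however it is implemented, is simply that an unresolved ethical question becomes as visible in sprint review as an unfinished feature.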
4.5 Benchmark Mechanism for the Concretization of Soft Law
As noted above, many AI ethics guidelines or principles are abstract and vague, mainly because their drafters want them to apply across as much of the AI field as possible. Yet how to comply with such broad norms in developing or using a specific AI can itself become a thorny problem for actors who want to comply. Therefore, beyond technical methodology, which usually takes the form of a framework or module applicable across many scenarios, more targeted ethical benchmarks need to be developed around the specific ethical concerns raised by particular AI applications. Yueh-Hsuan Weng and Yasuhisa Hirata, researchers at Tohoku University in Japan, have published an article on the ethical design of assistive robots, pointing out that robots for bed-transfer assistance, bathing assistance, walking assistance, excretion assistance, monitoring and communication assistance, and nursing assistance each raise relatively prominent and distinct ethical concerns and need to be treated separately. Although their study does not set out to formulate any AI ethics benchmark, identifying the special ethical issues each type of robot must address, in light of the characteristics of its human-robot interaction, is itself of benchmark significance, and it offers companies and their technical staff more targeted guidance for complying with AI ethics; a simple sketch of what such a concretized benchmark might look like follows.
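The sketch below shows, in Python, the general shape of such a concretized benchmark: a mapping from assistive-robot categories to the ethical concerns most salient for each. The concern lists are illustrative paraphrases made up for this example, not Weng and Hirata's actual findings.

```python
# Hypothetical sketch of a concretized ethics benchmark: each assistive-robot
# category is paired with the concerns most salient to its human-robot
# interaction. The entries are illustrative, not Weng and Hirata's actual list.

ASSISTIVE_ROBOT_BENCHMARKS = {
    "bed_transfer": ["physical safety under load", "consent to physical contact"],
    "bathing": ["bodily privacy", "dignity during intimate care"],
    "walking": ["fall prevention", "avoiding over-reliance that erodes mobility"],
    "excretion": ["privacy of health data", "preservation of dignity and autonomy"],
    "monitoring_communication": ["consent to continuous observation", "data minimization"],
    "nursing_care": ["risk of deceptive social interaction", "human oversight of care decisions"],
}

def checklist_for(robot_type: str) -> list[str]:
    """Return the targeted ethical checklist for a given robot category."""
    try:
        return ASSISTIVE_ROBOT_BENCHMARKS[robot_type]
    except KeyError:
        raise ValueError(f"no benchmark defined for robot type {robot_type!r}")

if __name__ == "__main__":
    # A developer of a bathing-assistance robot consults the targeted checklist
    # rather than an abstract, field-wide set of principles.
    for concern in checklist_for("bathing"):
        print(f"bathing-assistance robot must address: {concern}")
```

The design choice such a benchmark embodies is that compliance questions are posed per application category, so an abstract principle like "respect privacy" becomes a concrete, checkable item for the product at hand.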
4.6 Interaction Mechanism between Soft Law and Hard Law
Whether in international law, where soft law first emerged, or in the field of AI soft law, empirical studies have shown that soft law may later become hard law, or be absorbed into a hard law framework, a prospect that adds motivation, or pressure, to implement it. Professor Andreas Zimmermann, for example, found in his research on international soft law that non-legally binding agreements may stipulate, at an early stage, terms that states are willing to accept later as parts of legally binding treaties. Such memoranda of understanding are forerunners of future treaties; this "pre-law function" makes them more likely to be implemented. In the case of AI soft law, norms field-tested in the initial stage may later be formally legislated and incorporated into the traditional regulatory system. The Future of Life Institute, for example, released the Asilomar AI Principles in 2017, and the state of California has since written these principles into state legislation.
Beyond this prospect of future hardening into law, AI ethics can also drive companies and other entities to comply if it is given a place in the implementation of hard law. In the United States, for example, if a company fails to honor its public commitments on AI ethics, the Federal Trade Commission can treat this as an "unfair or deceptive" business practice and respond accordingly. In international law, courts and adjudicative bodies likewise frequently rely on non-legally binding agreements as guides to interpreting legally binding treaties. Of course, this absorption of soft law into the interpretation and application of hard law can itself be seen as another form of hardening; in a sense, AI ethics standards at that point are no longer pure soft law.
5 Conclusion: Taking Soft Law Implementation Seriously
The widespread existence of soft law does not mean that it is actually observed and implemented. Soft law in the field of artificial intelligence, the wide variety of AI ethics norms, has been shown by many researchers to suffer from an "effectiveness deficit": much has been invested in formulating norms and initiatives, but the results have been minimal. Of course, AI ethics norms are not entirely of "zero utility." They have placed certain constraints on many technology giants; norms of privacy, fairness, and interpretability have clearly gained weight; some progress has been made on the "micro-ethics" of particular issues; and their "promotional effectiveness" has begun to show in the community that develops and uses AI. Even so, the huge gap between AI ethics norms and reality remains deeply worrying.
There are at least seven reasons for this gap. Yet these factors do not make "soft law is meaningless" an inevitable conclusion. Thanks to their flexibility and agility, multiplicity and adaptivity, cooperativeness and experimentalism, de facto pressures, and cross-jurisdictional applicability, AI ethics norms retain a unique value that hard regulation and hard law cannot match, and they can accomplish the task of AI ethical governance together with hard regulation and hard law. How to realize AI ethics norms more fully, and how to promote their indirect implementation through a set of mechanisms, has therefore become a problem that must be taken seriously.
Based on empirical observation, the indirect mechanisms that help implement AI ethics norms can logically be extended into a general taxonomy of soft law implementation mechanisms. That taxonomic study, however, requires further exploration: not all mechanisms have been fully discussed here, and the mechanisms proposed here do not apply to every situation of soft law implementation. For soft law that is not especially technical or professional, for instance, a technical methodology mechanism is not necessarily needed; for soft law that is already specific and detailed enough, the concretization benchmark mechanism can likewise be dispensed with.
The makers and advocates of soft law naturally hope that it will play a practical, flexibly guiding role, but that role cannot rest solely on soft law's inherent persuasiveness, nor solely on the conscious recognition and compliance of the actors it addresses. Value consensus needs the assistance of the economic logic of cost-benefit calculation, so that more actors are willing to pay the costs of implementing soft law. Only the effective combination of internal reasons and external impetus, flexible rather than mandatory impetus, can keep soft law from being mere declaration, initiative, and window dressing. A typological study of soft law implementation mechanisms thus offers important guidance to the makers, advocates, and promoters of soft law in consciously building the corresponding mechanisms.
The original article was published in Financial Law, No. 6 (2024), pp. 108-127, and is reprinted from the WeChat public account "Financial Law."