On the Approaches to the Ethics of Artificial Intelligence
Dai Yibin
Abstract: There are many different approaches to research on the ethics of artificial intelligence. The positional approach advocates solving the ethical problems of artificial intelligence by determining its social status. The embedded approach proposes to avoid the ethical problems caused by artificial intelligence by embedding moral algorithms into it. The normative approach advocates regulating the direction of artificial intelligence's development by formulating corresponding ethical norms for it. This paper shows that each of these approaches faces serious difficulties, and argues that moral particularism provides a better way of addressing the ethical problems of artificial intelligence.
Keywords: Artificial Intelligence; Ethics; Moral Particularism
Artificial intelligence, as a revolutionary technology, has attracted widespread attention in society and, at the same time, caused many people to worry. There is a general concern that AI may give rise to many problems, such as unemployment, loss of privacy, and questions of liability. In order to solve these ethical problems, philosophers have adopted a variety of research approaches. Emre Kazim and Adriano Soares Koshiyama distinguish three such approaches according to the sources of AI's ethical problems: the principles-based approach, the process-based approach, and the ethical awareness approach. The principles-based approach attempts to solve the ethical problems raised by AI by regulating the use of AI technologies; the process-based approach attempts to avoid the ethical problems of AI by designing appropriate technologies; and the ethical awareness approach attempts to avoid the ethical problems that may arise from AI by regulating its users and designers. This paper, by contrast, distinguishes three research approaches as follows: the first solves the ethical problems of AI by determining the social status of AI; the second avoids the ethical problems that may arise from AI by embedding moral algorithms into AI; and the third addresses the ethical problems of AI through social regulation. The second roughly corresponds to the process-based approach described by Kazim and Koshiyama, and the third roughly corresponds to their principles-based approach; they represent attempts to constrain the ethical problems of AI from the perspectives of self-regulation and external regulation, respectively. The ethical awareness approach is not our focus, because it involves many uncertainties and Kazim and Koshiyama themselves say little about it. In addition, determining the social status of AI can help us solve its ethical problems to a certain extent, which is one reason why many scholars debate whether AI has the status of a subject. For convenience, we will refer to these three research approaches as the "positional research approach", the "embedded research approach", and the "normative research approach". Based on a discussion of these three approaches, this paper attempts to propose a new way of thinking about the ethical problems of AI.
1. The Positional Approach to the Ethics of Artificial Intelligence
The first research approach to AI ethics, the positional approach, advocates solving the ethical problems that AI may bring by determining the social status of AI. The idea can be summarized as follows: if we can determine the social status of AI, then we can solve the ethical problems of AI within the existing social and ethical system. It rests on two assumptions: (1) the existing ethical system will not change significantly in a short period of time; and (2) we are able to determine an appropriate social status for AI within the existing social system. The first assumption is generally considered reasonable, for there seems to be no reason to think that the existing ethical system will change so dramatically in a short period of time that we would be unable to solve the ethical problems of AI within the current ethical system even if we could determine the social status of AI. Indeed, considering that the ethical system of human society has changed little over long stretches of time, we should grant the reasonableness of the first assumption. For the first research approach, therefore, the crux of the problem lies in how to determine an appropriate social status for AI within the existing social system.
Although human society has evolved over a long period of time, its cognitive framework has not changed much and still maintains the distinction between subject and object. At present, people generally regard human beings as subjects and the entities standing opposite them as objects. For humans, common objects include natural objects as well as ordinary artifacts. Proponents of the positional approach expect to determine the ethical responsibilities of AI by determining its status as subject or non-subject. The problem is that there are insurmountable difficulties in regarding AI either as a subject or as a pure object.
First of all, regarding AI as a subject does not seem to accord with the current state of AI's development. Although AI, as a new technology, is developing exceptionally rapidly, as of now it can excel only in single domains and cannot surpass human intelligence as a whole. This is because AI is always limited by its algorithms, and no algorithm can solve all problems; moreover, AI always faces the frame problem when dealing with data: it cannot determine which of the data it collects is relevant to a given task and which is not. These mechanistic limitations mean that AI cannot break through its own constraints and become a genuine subject in the way that human beings are.
Second, regarding AI as a subject would bring considerable confusion to society. Every subject not only possesses the basic rights of a subject but also bears the responsibilities a subject should bear. This means that regarding AI as a subject would confer on it corresponding rights and responsibilities. These rights would include at least a right to life and health similar to that of human beings, while the associated responsibilities would include at least legal and moral responsibilities. It is obvious, however, that neither granting AI some kind of right to life nor requiring it to bear some kind of legal responsibility is tenable. Granting AI a right to life would mean that we could not end the physical existence of an AI at will, and that it would be unreasonable to destroy an AI product even in special circumstances, such as when the AI is about to harm a human's pet, because the AI would have its own right to life. By the same token, asking the AI itself to bear some kind of responsibility would provoke considerable resentment, because, strictly speaking, an AI product has no viable means of assuming responsibility: it cannot provide compensation to victims in any substantive sense.
Similarly, treating AI as an object in the full sense of the word faces many difficulties. From the point of view of its working mechanism, AI has a certain degree of autonomous decision-making ability. For example, the "Weiwei" poetry-composing robot can autonomously generate new poems without human intervention, an attribute that distinguishes it from traditional machines. Regarding intelligent machines as pure objects erases the difference between intelligent machines and traditional machines, which is not conducive to a true understanding of AI. On the other hand, treating AI as a pure object leads to confusion over the question of responsibility. The existing ethical system holds that a pure object bears no responsibility and that the responsibility arising from it is borne by its user or creator. The problem is that claiming either that the user or that the creator bears the responsibility caused by AI will provoke dissatisfaction. Users are not directly involved in the decision-making process of AI, and asking them to bear the responsibility does not accord with our understanding of responsible agency; similarly, the decision-making process of AI generally lies beyond the predictive reach of its creators, and asking the creators to bear all the responsibility for AI would dampen their enthusiasm and motivation, which is not conducive to the development of the AI industry as a whole.
In recent years, some scholars have attempted to circumvent the irrationality of the above positions by granting AI a certain "quasi-subject" status. The core point of this position is neither to grant AI the status of a subject nor to grant it the status of a pure object, but to find a quasi-subject status between the two, similar to the subject yet different from it. This quasi-subject status is conferred by humans and can be revoked by humans under certain special circumstances. Like a human pet, such an entity has certain rights but cannot bear substantive responsibilities. Undoubtedly, granting quasi-subject status to AI has many advantages and can, in theory, overcome the difficulties that arise from treating AI as a true subject or as a pure object; but it is doubtful that such a position can truly solve the ethical problems that AI may bring.
First, our existing system of ethical responsibility is based on the dichotomy between subject and object, and this system leaves insufficient room for interpreting a quasi-subject status, which leaves it unclear in what sense the quasi-subject status of AI is valid. The analogy between the social status of pets and the quasi-subject status of AI is also inaccurate, because there are significant differences between AI and pets: humans can feel some of the joys and sorrows of pets through empathy, but the joys and sorrows displayed by AI may fail to trigger genuine human emotions.
Second, granting quasi-subject status to AI does not solve the ethical problems that AI may bring, because the existing ethical system does not clarify the ethical relationships attached to quasi-subject status. Intelligent driverless cars make this clear. That an intelligent driverless car has quasi-subject status does not resolve its responsibility dilemma. On the one hand, quasi-subject status does not require the car to take responsibility, because it is not a genuine responsible subject; on the other hand, granting it quasi-subject status provides no guidance on how responsibility should be attributed. That is to say, even if we grant quasi-subject status to intelligent driverless cars, we are still unable to deal with the difficult question of their responsibility.
The above two reasons are enough to show that, within the existing ethical system, the quasi-subject status of AI probably has only certain conceptual advantages and plays no substantial role in the practical process of solving the ethical problems of AI.
2. The Embedded Approach to the Ethics of Artificial Intelligence
The second research approach to AI ethics, the embedded approach, advocates circumventing the ethical problems that may arise from AI by designing AI as a morally sensitive intelligence. The core idea can be summarized as follows: if we can build into each AI product a moral algorithm similar to human moral consciousness, and use that algorithm to handle the various ethical choices the AI may face, then the AI can avoid all kinds of ethical problems. This research idea amounts to designing AIs as artificial moral agents (AMAs), so that when they encounter ethical problems they can autonomously make choices consistent with the existing ethical system. Intuitively, if AI has the ability to make moral judgments equivalent to or even beyond those of individual humans, then the ethical problems brought about by AI will clearly not be an obstacle to its development. The question is: can we design AI as an artificial moral agent that conforms to the human ethical system? And can an AI with moral decision-making ability really circumvent all ethical problems?
Generally speaking, there are only three ways of realizing the design of AI as artificial moral agents that conform to the human ethical system: the top-down approach, the bottom-up approach, and the hybrid approach. Each of them, however, faces insurmountable problems. We begin by examining the first.
The first is called the top-down approach because it advocates simplifying the ethical system of human society into a set of moral principles and embedding these principles, as code, in AI algorithms to guide the AI's moral decisions. It is doubtful, however, that this approach is feasible.
First, the ethics community has not reached a consensus on the basic theory or fundamental principles of ethics: it is not clear whether one should endorse utilitarianism, deontology, or virtue ethics. This means that the top-down approach faces a difficult theoretical choice from the outset. Some scholars have tried to circumvent this dilemma by advocating the direct design of universal ethical laws to regulate the moral behavior of AI, the best-known example being the "Three Laws of Robotics" put forward by Isaac Asimov in the 1940s and collected in I, Robot: (1) a robot may not harm a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the commands given to it by human beings, except where such commands would conflict with the first law; and (3) a robot must protect its own existence as long as such protection does not conflict with the first or second law. Yet even the "Three Laws of Robotics", intuitive as they are, have caused a great deal of controversy. Asimov himself reflected on this and later expanded the three laws into four. These facts show that choosing the most basic ethical principles for AI is not as simple as some people think.
Second, even if philosophers could choose the most basic ethical principles for AI, it does not follow that these principles would be applicable to all ethical scenarios, because different ethical principles may yield contradictory results in the same scenario. Asimov's treatment of the Three Laws of Robotics is a typical example: his fiction itself is centered on the conflicts the Three Laws generate. In fact, considering the different choices people make in different versions of the trolley problem, we should recognize that no single ethical principle applies to all ethical scenarios. This is an important reason why the top-down approach is in decline in the academic world.
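To make the top-down idea, and the two worries just raised, more concrete, the following minimal sketch (in Python, with action descriptions and predicates that are entirely hypothetical) shows how Asimov-style laws might be encoded as a prioritized rule filter over a machine's candidate actions. It is an illustration of the design pattern, not an implementation anyone has actually deployed.

```python
# A minimal, hypothetical illustration of the top-down idea: Asimov-style laws
# encoded as prioritized rules over a machine's candidate actions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str
    harms_human: bool        # would the action injure a human being?
    allows_human_harm: bool  # would it let a human come to harm through inaction?
    obeys_order: bool        # does it comply with the current human command?
    preserves_self: bool     # does it keep the machine itself intact?

def choose_action(candidates: list[Action]) -> Optional[Action]:
    # First Law as a hard constraint: discard anything that harms a human
    # or knowingly lets a human come to harm.
    safe = [a for a in candidates
            if not (a.harms_human or a.allows_human_harm)]
    if not safe:
        return None  # the rule system simply has no answer here
    # Second and Third Laws act as tie-breakers, in that order.
    safe.sort(key=lambda a: (a.obeys_order, a.preserves_self), reverse=True)
    return safe[0]

# A trolley-style case in which every candidate violates the First Law:
candidates = [
    Action("swerve into the barrier", harms_human=False, allows_human_harm=True,
           obeys_order=True, preserves_self=False),
    Action("stay on course", harms_human=True, allows_human_harm=False,
           obeys_order=True, preserves_self=True),
]
print(choose_action(candidates))  # -> None: the principles conflict and fall silent
```

When every candidate action violates the highest-priority law, the filter simply falls silent; this is one concrete form of the conflict problem just described, and adding further laws only multiplies the opportunities for such conflicts.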
The second realization approach, the bottom-up approach, advocates training AI, through data, into agents whose behavior conforms to human ethics. In essence, it is a data-driven approach. The most popular AI technology in academia at present, deep learning, is a typical representative of it. Data-driven AI has achieved many unexpected results, AlphaGo being one example. However, there is still considerable controversy about whether this approach is feasible where ethical issues are concerned.
First of all, moral decisions are context-sensitive, and different moral data may lead to completely different moral decisions. From this perspective, preparing the right moral data for AI is a crucial matter. The problem is that it is not clear what kind of moral data is appropriate, a difficulty that becomes even more obvious when we consider that people in different cultures and ethnic groups may hold different views on the same moral event.
Second, AI algorithms have black-box characteristics, and no one knows what choices a moral algorithm will make given sufficient data. This means that even if we are able to select appropriate moral data for AI, its moral decision-making remains uncertain: it may make ethical decisions, but it may equally make unethical ones. In other words, AI may learn "good" or learn "bad", and the latter is clearly not what we expect.
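To make the first of these worries concrete, the following toy sketch (in Python, with features, labels, and scenarios invented purely for illustration) shows a minimal data-driven "moral learner": a nearest-neighbour rule over hand-coded scenario features. Real systems use deep networks and far larger corpora; the point here is only that the learned verdict tracks whatever the training data happen to say.

```python
# A toy, purely illustrative "bottom-up" learner: a nearest-neighbour rule over
# hand-made scenario features. All features, labels and cases are invented.

def nearest_label(case, corpus):
    """Return the label of the training scenario closest to the given case."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(corpus, key=lambda item: dist(item[0], case))[1]

# Feature vector: (harms_one_person, benefits_many, violates_explicit_rule)
corpus_a = [((1, 1, 0), "impermissible"),  # annotators who weight the harm heavily
            ((0, 1, 0), "permissible")]
corpus_b = [((1, 1, 0), "permissible"),    # the same scenario, labelled the other way
            ((1, 0, 0), "impermissible")]

new_case = (1, 1, 1)  # harms one person, benefits many, bends an explicit rule
print(nearest_label(new_case, corpus_a))  # -> impermissible
print(nearest_label(new_case, corpus_b))  # -> permissible
# The verdict flips with the training corpus: the learned "morality" is only as
# stable as the data it was trained on, which is exactly the worry raised above.
```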
The third realization approach, the hybrid approach, advocates combining the top-down and bottom-up approaches in order to design appropriate artificial moral agents. According to some scholars, the hybrid approach seems a natural choice in the face of the many difficulties associated with the other two. Although it appears more promising by comparison, the hybrid approach has difficult problems of its own; in particular, how to integrate the top-down and bottom-up elements is a hard question for anyone designing ethical AI. Wendell Wallach and Colin Allen mention various factors that affect the generation of morality, such as habits inherited from genes, core values discovered through experience, and culturally shaped principles; how to synthesize these factors is obviously a difficult problem that the hybrid approach must confront.
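For completeness, here is an equally minimal and hypothetical sketch of the hybrid idea: a hard top-down constraint is checked first, and a learned, bottom-up score decides among whatever actions remain. Both components are invented stand-ins; the sketch is meant only to expose the integration question raised above, namely what should arbitrate when the two layers pull in different directions.

```python
# A minimal, hypothetical sketch of the hybrid idea: a hard top-down constraint
# filters candidate actions, then a learned (bottom-up) score chooses among them.

def violates_hard_rule(action):
    return action["harms_human"]               # stand-in for an explicit principle

def learned_score(action):
    return action["predicted_social_benefit"]  # stand-in for a trained model's output

def hybrid_choice(candidates):
    allowed = [a for a in candidates if not violates_hard_rule(a)]
    if not allowed:
        return None  # open question: what arbitrates when the layers disagree?
    return max(allowed, key=learned_score)

candidates = [
    {"name": "act A", "harms_human": False, "predicted_social_benefit": 0.4},
    {"name": "act B", "harms_human": True,  "predicted_social_benefit": 0.9},
]
print(hybrid_choice(candidates)["name"])  # -> "act A": the rule overrides the higher score
```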
Indeed, even if we are able to design AI as an artificial moral agent, it does not follow that such an AI understands what moral behavior is, because its behavioral decisions come from its algorithms, not from free will. We generally think that an action is moral because the agent, moved by motives such as love or compassion, autonomously chooses to act in accordance with ethical norms from among many possibilities; but, as Mark Coeckelbergh puts it, AI does not seem to have love, nor compassion or concern. The more important point is that designing AI as an artificial moral agent does not circumvent all ethical problems. To some extent this is due to the nature of algorithms: no single algorithm can handle all ethical scenarios. This means that artificial moral agents may still make mistakes, and an AI with the capacity for ethical judgment may still act in ways inconsistent with human morality. In that case, we will still face the question of how to deal with the ethical problems of AI. In other words, the embedded approach does not completely solve all the ethical problems of AI.
3. The Normative Approach to the Ethics of Artificial Intelligence
The third research approach to AI ethics, the normative approach, advocates regulating the behavior of AI through the construction of external norms and various rules and regulations, so as to make it conform to the human ethical system. Many scholars in China support this approach, and in the practice of AI it is also supported by many individuals and organizations. It is fair to say that, given the many controversies surrounding the first two approaches, the normative approach is the more realistic one. The various AI ethics codes issued by some countries and organizations are typical expressions of it. Nevertheless, this approach also faces many difficulties.
First, the concepts on which various norms for AI are to be constructed are not so clear. It is generally believed that the development of AI must adhere to the principle of human-centeredness, and intuitively this claim is reasonable. The problem is that the vast majority of scholars do not go on to explain whether the "human" in human-centeredness refers to all of humanity, the citizens of certain countries, certain groups of people, or particular individuals. In some cases, the development of AI benefits humanity as a whole but harms certain groups or individuals. For example, AI needs to collect more data in order to serve humanity better, and in this process its operation may cost certain people their privacy and jeopardize their interests; the development of some AI products may even put certain people out of work and leave them marginalized. In other cases, the development of AI may benefit individuals and satisfy the selfish desires of certain people while harming the overall interests of humankind; the creation of some AI weapons, for example, favors certain warmongers but can easily harm the innocent. These possibilities suggest that, without a reasoned account of who the "human" is, the principle of human-centeredness cannot play a normative role even at the conceptual level. More importantly, even if we could clarify the various normative concepts needed for the development of AI, these concepts might still conflict with one another. For example, people are accustomed to interpreting the principle of human-centeredness as taking the overall interests of all human beings as fundamental, whereas the principle of justice requires us to treat every individual equally, and the two are bound to clash in certain ethical scenarios. From this perspective, regulating AI requires clarifying not only its regulative concepts but also the relationships among them.
Second, identifying who should formulate the regulations is also a challenge. As a new technology, AI clearly needs regulation; the question is whom we should entrust with formulating it. Legal professionals have an in-depth understanding of legislation, but their knowledge of AI is obviously inferior to that of its designers; enterprises may have a more accurate grasp of the direction of AI's development, but their profit-seeking tendencies mean we cannot place complete trust in them. Some scholars have suggested regulating the direction of AI's development by establishing ethics committees. This suggestion certainly deserves attention, yet the composition and mode of operation of such committees still require careful thought. We usually assume that the members of an AI ethics committee should be experts and scholars familiar with AI; but the people really affected by AI are ordinary people who know little about it, and their voices deserve to be taken more seriously. More importantly, even if we can form an AI ethics committee, in actual operation its role may not be as great as imagined, and it will always be subject to external countervailing forces. The dissolution of Google's AI ethics committee only nine days after its establishment is clear evidence of this.
Third, how to formulate reasonable regulations for AI is itself a challenge. Generally speaking, formulating rules and regulations for a technology presupposes a clear understanding of that technology, but AI is special in that its operation is opaque: not only can no one accurately predict its decisions, but no one can explain afterward why it made them. In other words, AI's mode of operation is neither transparent nor interpretable. These properties mean that our understanding of AI is unlikely to be clear, and from this perspective it is very difficult to formulate reasonable regulations to govern its behavior.
An even more serious problem facing the normative approach is that recognizing the reasonableness of various norms for AI does not mean that AI will necessarily behave in ways consistent with our ethical system, because the unpredictability of AI makes it possible for an AI to follow the norms and still do unethical things. In such cases, the question of liability arising from AI creates serious problems for the norm-setters: intuitively, since the AI made its decisions in compliance with the norms laid down by the norm-makers, the consequences of its behavior should be borne by them. But this way of attributing responsibility clearly conflicts with our general understanding of responsibility, for we usually think that only an agent who satisfies two conditions can be regarded as a subject of responsibility: (1) the agent is the subject of the behavior; and (2) the agent acts freely, not under compulsion. Those who formulate AI norms clearly do not satisfy these conditions; they are not the subjects of the AI's behavior.
4. A New Research Idea: Thinking Based on Moral Particularism
The above discussion shows that the three mainstream research approaches to AI ethics all face intractable problems. Since AI cannot be given an accurate position within the current ethical system, the positional approach risks failure; since it is difficult to embed appropriate moral algorithms into AI, the embedded approach also faces many difficulties; the normative approach is comparatively more realistic, but it too must deal with many specific problems. Moreover, the three approaches have one thing in common: they all seem inclined to treat AI products as a single whole, expecting to solve all the problems of AI ethics once and for all in some one way. The positional approach tries to position all AI products, the embedded approach wants to embed ethical algorithms in all AI, and the normative approach tries to regulate all AI products. I doubt, however, that this line of thinking can reasonably address all the ethical issues AI poses. Different AI products behave in different ways, the ethical problems they pose vary, and some AI products generate no serious ethical problems at all. Treating AI as a unified whole therefore risks erasing the distinctions between different AI products. In my view, moral particularism provides a good clue for solving this problem.
Moral particularism can be traced back to Aristotle, who often emphasized that ethics is ultimately concerned with particular cases rather than general theories. This idea has inspired many scholars, John McDowell and Jonathan Dancy being prime examples. On Dancy's interpretation, moral particularism can be understood as the view that "the possibility of moral thought and moral judgment does not depend on the provision of suitable moral principles." This view is opposed to moral generalism, which holds that moral thought and moral judgment do depend on suitable moral principles. For quite some time moral generalism was dominant in the academy, with utilitarianism and deontology both falling under it. Dancy, however, argues that moral generalism is untenable.
It is commonly held that moral generalism takes one of two forms: either it recognizes only one absolute moral principle, or it asserts that there are multiple moral principles that do not conflict with one another. On Dancy's interpretation, the former is an atomistic conception of moral principles and the latter a facilitative one, and it is the latter that Dancy discusses at greater length. In his view the former conception is simply wrong: because morality involves many relevant properties, it cannot be atomistic. For Dancy the latter conception is also problematic: inconsistency among moral principles, the impossibility of determining how many moral principles there are, and the inability to account rationally for morally relevant properties all contribute to its difficulties. Dancy favors moral particularism, but he speaks first of particularism in a general sense.
Borrowing McDowell's formulation, Dancy explains the most basic claim of particularism as the idea that we neither need to, nor can, regard the search for an evaluative outlook that one can endorse as rational as the search for a set of principles. On Dancy's interpretation, the main argument in favor of particularism is that "the behaviour of a reason (or of a consideration that serves as a reason) in a new case cannot be predicted from its behaviour elsewhere. The way in which the consideration functions here either will or at least may be affected by other considerations here present." That is, behavior based on a given reason will vary across scenarios, as will the reasons for which the same behavior occurs; the same consideration that counts as a reason in favor in one scenario may count as a reason against in another, or may not be a reason at all. Dancy calls this view holism in the theory of reasons. In his view, everyday life provides enough examples to show that reasons, both theoretical and practical, are holistic, and that whether a given consideration is a reason for an action is entirely dependent on context.
Holism in the theory of reasons has many advantages and can explain many phenomena that atomism cannot, such as various cognitive phenomena and practical activities that Dancy often cites. Within the field of ethics, Dancy argues that holism also holds, because moral reasons work in roughly the same way as non-moral reasons. For example, it is generally recognized that having made a promise is usually a reason to do something and is morally relevant, but in some cases it does not constitute a reason to do it and may even be a reason not to. From this perspective, the performance of moral acts and the making of moral judgments are related to many factors, which Dancy calls morally relevant features, and these features cannot be reduced to one another. Dancy's arguments suggest that there is no necessary connection between moral principles and morality, because it is impossible to explain all moral phenomena by means of mutually consistent moral principles.
The above is a brief outline of moral particularism; it leaves many detailed issues aside, but it is sufficient for our purposes. If moral particularism is sound, then the ethical problems caused by AI cannot be explained in terms of moral principles but can only be analyzed from the perspective of holism in the theory of reasons. Every ethical issue raised by AI, like other ethical issues in human social life, involves different reasons, and these reasons play different roles in different scenarios, or may not operate at all in some of them. The various ethical issues brought about by AI, and the AI-related ethical phenomena that have provoked extensive discussion, illustrate this well. From this perspective, we should be content to analyze the specific ethical problems that AI may bring within specific scenarios and to give appropriate answers to those specific problems.
Of course, strictly speaking, explaining the ethical problems of AI on the basis of moral particularism still rests on one basic principle, namely the principle of fact: it requires us to consider all the parties involved in a problem on the basis of what actually happened and the real reasons for it, in order to solve the ethical problems brought about by AI. It is important to note, however, that dealing with the ethical problems of AI on the basis of the principle of fact within the framework of moral particularism does not contradict the basic idea of moral particularism. The principle of fact is, presumably, a principle on which all ethical theories must rest, and it differs from the ethical norms proposed by the various normative ethics, such as the principle of maximizing benefit and the principle of justice: the former is universal, the latter are particular. Moreover, it is difficult to imagine how ethical issues could be dealt with in a false context.
In fact, dealing with the ethical issues of AI based on moral particularism also has many advantages.
First, moral particularism is compatible with the realities of AI ethics. One major reason why AI gives rise to a variety of ethical problems is that AI products are diverse and produce different consequences in different fields; even within the same field, different AI products can create different ethical problems for different reasons. For example, the copyright problems of AI are obviously not the same thing as its unemployment problems; and within the field of copyright, the collection of poems produced by Microsoft's Xiaobing is not the same case as the classical-style poems generated by the "Weiwei" poetry robot. In other words, AI gives rise to different ethical issues across different products and different application scenarios, which is consistent with the basic position of moral particularism, since moral particularism advocates analyzing the ethical problems of AI on the basis of the different reasons at work in different scenarios.
Second, on the basis of moral particularism we can resolve many AI ethical dilemmas that cannot be resolved within the framework of moral generalism. Take the liability problem of intelligent driverless cars as an example. Intelligent driverless cars pose a liability problem because it is difficult, under the existing ethical system, to assign an appropriate responsible party. Moral particularism offers a good way out: according to it, every responsibility problem posed by intelligent driverless cars has its own specific and distinct causes. Some problems arise from the quality of the car itself, some may be caused by the algorithm, some by pedestrians failing to obey traffic rules, and so on. Different liability issues can then be dealt with according to the different factors that produced them; we do not need a single uniform principle to handle every liability challenge. To some extent, this way of dealing with the problem is consistent with the hybrid scheme proposed for the AI liability puzzle, and many scholars believe that the hybrid scheme is one of the most promising solutions to the AI liability problem.
Third, solving the ethical problems of AI on the basis of moral particularism is consistent with the living soul of Marxism, namely the concrete analysis of concrete problems, and to a certain extent it reflects the distinctive way in which socialism with Chinese characteristics approaches the ethical problems of AI. The concrete analysis of concrete problems is an important Marxist methodology and a scientific method that our Party requires leading cadres at all levels to adhere to. It opposes dogmatism and bookishness; it requires us to clarify the content of an issue and grasp the objective facts; and it focuses on grasping issues under specific historical conditions and in specific scenarios. Although moral particularism has a different theoretical background, in the field of practice it is basically in agreement with the methodology of concrete analysis of concrete problems. From this perspective, using the basic theory of moral particularism to solve the ethical problems of artificial intelligence can confirm the truth of Marxist methodology.
Of course, solving the ethical problems of AI on the basis of moral particularism may also raise some worries, the most important of which stems from the theoretical character of moral particularism itself: using it to solve the ethical problems of AI may seem to lead to relativism. It seems to me, however, that this is probably not something we need to worry about. Moral particularism does recognize the diversity of reasons and that different perspectives yield different reasons; but the point is that these reasons all belong to a holistic vision of reasons and are all objects that human reason can grasp. To some extent we may even think that AI can help us clarify the puzzle of moral relativism: because AI's capacity to collect and process data far exceeds that of human beings, it can discover correlations that humans cannot easily find, helping us lift the veil of relativism and recognize the moral intuitions and moral consensus that human beings may share.
For the above reasons, we believe that using moral particularism to think about the ethical issues of AI is a program that deserves attention. When dealing with the ethical issues of AI, what we need to grasp is not isolated factors but the interconnections among various factors. This is not to say that there is nothing in common among the ethical issues of AI, but rather that when we consider them we need to attend to the many different moral reasons behind each particular issue.