FENG Zixuan
Ethical Stance and Governance of Generative AI Applications: A Case Study of ChatGPT
Professor, School of Artificial Intelligence Law, Southwest University of Political Science and Law
Abstract: With the rapid development of generative AI technology, represented by ChatGPT, the ethical dimensions of artificial intelligence have drawn wide attention. Generative AI raises ethical problems such as the weakening of human subjectivity, the aggravation of algorithmic bias and discrimination, over-reliance on outputs shaped by the pseudo-environment, and the difficulty of two-way value alignment in human-machine collaboration.
Intelligence is a general mental ability that includes reasoning, planning, problem-solving, abstract thought, the comprehension of complex ideas, rapid learning, and learning from experience. It is not limited to a specific domain or task but encompasses a wide range of cognitive skills and abilities. Artificial intelligence is a simulation and extension of human intelligence. In recent years, generative AI represented by ChatGPT has been trained on massive text datasets, giving machines strong language-understanding and text-generation capabilities: they can hold high-quality conversations about human input, correct errors automatically, and tackle problems spanning language, mathematics, coding, vision, medicine, law, psychology, and other fields. At the same time, the technology has sparked controversy over AI ethics. For example, ChatGPT can assist research work, but its use also enables academic misconduct such as plagiarism, data falsification, and experimental fraud, and some teaching and research institutions have banned it. OpenAI itself acknowledges that GPT-4 has limitations similar to those of earlier GPT models: it is not fully reliable on its own and carries risks of bias, disinformation, privacy violation, and over-reliance.
This paper observes the current state of ethical norms in the application of generative AI, summarizes the ethical problems that arise in practice, and explores the stance and concrete approach for constructing its ethical normative system. At present, generative AI faces ethical dilemmas such as infringing individuals' basic rights, inheriting humanity's existing erroneous values, fostering reliance on outputs produced within a pseudo-environment, and common-sense paradoxes. Building on the development of technologies such as natural language processing and reinforcement learning, the basic ethical principles that generative AI should follow must be constructed in a scenario-based manner, and governance strategies proposed, so as to provide strong scientific and technological ethical support for improving human well-being and promoting the building of a community with a shared future for mankind.
1. The current state of ethical norms for generative AI applications
Taking ChatGPT as an example, its core technology is characterized by pre-training, large models, generativity, and reinforcement learning from human feedback. Through large-scale parameters and data volumes, it is closely linked to human needs and finally produces outputs that simulate human cognitive mechanisms in depth. Generative AI can also evaluate the answers it generates, enabling functions such as capturing user intent, intelligent interactive dialogue, text generation, and literature retrieval. Some scholars regard ChatGPT as a "milestone" for AI in both realism and function: it commands rich background knowledge and strong inductive ability; judges situations objectively and clearly; plans steadily and reasonably; decides calmly and quickly; acts precisely and powerfully; works longer hours with lower resource consumption; and corrects errors and upgrades itself through feedback and learning, so its intelligent advantages over humans may keep expanding. The application of such powerful generative AI has aroused widespread social concern, chiefly because the "blind spots" of ChatGPT and similar technologies have triggered multiple ethical conflicts, and countries have carried out targeted ethical exploration in response.
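The reinforcement-learning-from-human-feedback mechanism mentioned above can be illustrated with a deliberately simplified sketch. Real RLHF trains neural reward models and policies; here the feature names, weights, and update rule are all hypothetical, and the point is only the loop: a scalar reward is nudged toward whichever response a human annotator prefers, after which the system ranks outputs by that learned reward.

```python
# Illustrative sketch of the RLHF preference loop (all features and
# weights are hypothetical, not OpenAI's actual method).
def reward(features, weights):
    """Score a candidate response as a weighted feature sum."""
    return sum(weights[k] * v for k, v in features.items())

def update(weights, preferred, rejected, lr=0.1):
    """Nudge weights toward the human-preferred response's features."""
    for k in weights:
        weights[k] += lr * (preferred.get(k, 0) - rejected.get(k, 0))

weights = {"helpful": 0.0, "harmful": 0.0}
a = {"helpful": 1, "harmful": 0}  # annotator prefers this response
b = {"helpful": 0, "harmful": 1}  # over this one
for _ in range(10):
    update(weights, preferred=a, rejected=b)

# After feedback, the model ranks the preferred kind of answer higher.
print(reward(a, weights) > reward(b, weights))  # True
```

The sketch also hints at why the paper later worries about user-supplied feedback: whatever preferences enter the loop, benign or malicious, are what the reward comes to encode.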
1.1 The current status of ethical norms for generative AI applications at the level of international organizations
At the level of international organizations, the norms are mainly ethical advocacy. In June 2018, the G7 summit adopted the "Common Vision for the Future of Artificial Intelligence", which covers ethical principles such as being people-oriented, enhancing trust, ensuring security, and protecting privacy, initially clarifying the principles that the development of AI should follow. Since then, both UNESCO's Recommendation on the Ethics of Artificial Intelligence and discussions under the theme "The Age of Artificial Intelligence: Building a Digital World of Communication, Mutual Learning and Inclusiveness" have focused mostly on the general application of AI, aiming to build consensus on the ethics of AI research, development, and application.
1.2 The current status of ethical norms for generative AI applications in major developed countries and regions
The EU's ethics review is distinctive. Before the advent of ChatGPT, the European Union had been promoting an ethical governance model for AI based on risk-stratified regulation. Afterwards, the EU attached great importance to the ethical issues raised by generative AI and sought, through policy and legal means, to promote ethical review covering the research and development of generative AI applications, market access, and the protection of users' legitimate rights and interests. In June 2023 the European Parliament adopted its negotiating position on the Artificial Intelligence Act, which treats certain AI systems, including applications of technology such as ChatGPT, as "high-risk" and proposes a necessary, comprehensive ethical review of such technologies to reduce risks and safeguard user autonomy, fair treatment, and privacy. The EU holds that AI should conform to human ethics and must not deviate from basic human morals and values; compared with general AI, it is more cautious about the ethical review of generative AI, emphasizing the maintenance of the basic ethical order and the protection of citizens' basic rights.
The United States both encourages the development of new technologies such as generative AI and tracks and regulates them through executive orders: an ethical governance model that combines encouragement with regulation. The U.S. has long prioritized innovation by technology companies, and after the major breakthrough in generative AI, the U.S. government and its advisory bodies intensively released documents on ChatGPT-related issues, aiming to protect constitutional rights such as civil liberties and fairness from risks like algorithmic discrimination, and to establish accountability for information distortion, deepfakes, privacy infringement, and other harms caused by the new technology. After the advent of ChatGPT, the United States encouraged generative AI but did not give it free rein, and ethical issues now carry significantly more weight than before. A number of documents call for ensuring that AI is ethical and trustworthy and for protecting citizens' legitimate rights and interests such as personal information, privacy, fairness, and freedom, reflecting the humanistic stance of the United States toward generative AI and expressing it in federal policy and legal frameworks.
As early as 2018, the United Kingdom put forward five ethical principles for AI with a strongly humanistic tenor. More recently, its ethical measures for generative AI such as ChatGPT have been highly targeted. In education, for example, many UK universities have sharply restricted the use of generative AI in academic activities such as essay writing, with violators facing penalties as severe as expulsion. The UK also attaches great importance to regulating the application of generative AI, reflecting its respect for basic rights and for ethical norms such as fairness and justice.
1.3 The current status of ethical norms for the application of generative AI in China
China's AI ethics framework is at an early stage of construction and is still seeking a balance between security and development. To cope with the new technology's impact on legal ethics and order, multiple ministries and commissions jointly issued the Interim Measures for the Administration of Generative Artificial Intelligence Services, requiring generative AI to respect social morality and ethics, adhere to the core socialist values, prevent discrimination, ensure accuracy and reliability, and respect the legitimate rights and interests of others. At the local level, the Regulations on the Promotion of the Artificial Intelligence Industry in the Shenzhen Special Economic Zone, for example, expressly establish an AI ethics committee and encourage enterprises to use technology to prevent ethical security risks and compliance risks. Overall, China is exploring an ethical governance scheme in which central and local governments act in concert and all parties participate; it has successively stipulated the institutional setup of science and technology ethics committees, promulgated guidelines of ethical principles, and gradually formed ethical norms that are people-oriented, oppose prejudice and discrimination, and enhance transparency. In the long run, China's relevant ethical system still needs improvement in organizational structure, procedural design, unification of standards, and allocation of responsibility.
2. Multiple ethical dilemmas in generative AI applications
The emergence of generative AI is profoundly affecting the form and approach of existing AI ethical norms. On the one hand, the application of generative AI challenges the dominant position of human beings within existing AI ethics, creating a dilemma for the ethical order. An "artificial moral agent" is an AI that bears moral responsibility to some degree, a new concept that has emerged with the development of AI technology. As AI products rapidly enter everyday life, their ethical after-effects force responses at both the theoretical and institutional levels. If agent status is not recognized, responsible subjects will be lacking and constructing an AI governance framework becomes difficult; if moral-subject status is recognized, the humanistic ethical order suffers a huge impact. On the other hand, existing ethical norms are strongly principled and largely advocacy-based, making it difficult for them to keep pace with generative AI or to resolve new problems such as over-dependence on the pseudo-environment in the course of its application. In a society of human-machine coexistence, the statement that AI is a tool should be transformed into a behavioral norm requiring AI to respect the values, norms, and laws of human society, creating a new form of humanism for the intelligent era. At present, the ethical dilemmas of generative AI caused by the absence of humanism are increasingly obvious.
2.1 Weakening the value of human subjectivity
Under the humanist ethical stance, human beings are endowed with inherent value or value priority, the ethical order revolves around humans, and ethical construction is centered on "humans shaping the world". In a society where humans and machines coexist, however, AI is becoming an actor with a certain capacity for independent, morally relevant action; through its information output it can readily analyze, identify, and shape people's concepts and cognitive systems, producing the reverse result of "AI shaping humans", and human subjectivity as an end suffers a setback. In addition, because individuals differ in their ability to use AI, an "AI divide" analogous to the digital divide may deepen division within the community: some vulnerable groups will either face intellectual elimination or become victims of algorithmic bullying. This shakes the humanist position in the general sense and bears especially on inherent values such as privacy and labor. In terms of privacy, intelligent systems collect personal information such as communications, social data, technical information, log data, and usage data, and then track and analyze it. Even when no sensitive personal data appears in the training corpus, generative AI can infer private information through human-computer interaction, easily bypassing the principle of informed consent and building personal profiles containing real, sensitive information, indirectly violating the data-minimization (least-necessary) principle.
The Interim Measures for the Administration of Generative Artificial Intelligence Services focus on providers' obligations to protect users' input information and usage records in the course of service, prohibiting the unlawful retention of input information from which a user's identity can be inferred. This underscores that the lawful, compliant collection and use of users' personal information remains a thorny and urgent problem. As for labor value, AI will negatively affect employment: studies note that AI most threatens low-end workers, whose repetitive labor is easily replaced, and some researchers point out that low-skilled, low-education workers will be the main victims of automation. Moreover, in ChatGPT's supervised learning process, the developer hired cheap labor to label and train data at very low pay in order to ensure data reliability. Human participation is a key factor in the application of generative AI, yet the value of a great deal of human labor is obscured rather than respected and reflected. The hegemony of platform algorithms has likewise placed unprecedented pressure on workers, seriously challenging human subjectivity and the existing ethical order of human society.
2.2 Exacerbating algorithmic bias and discrimination
The application of AI simulates and extends human intelligence, so the bias and discrimination of human society are highly likely to be copied into it. First, big data, as an image of human society, contains deep-rooted biases. Large models learn stereotypes from their data, inherit biases from the training dataset, project social prejudice onto specific groups, and inherit or deepen social stereotypes, leaving some groups treated unjustly. Generative AI, for example, has shown religious bias: in one study, 23% of test cases associated Muslims directly with terrorists. Second, generative AI obtains and analyzes human responses during human-computer interaction, receives continuous feedback, reinforces its own learning, and improves its accuracy. But if users act with subjective malice, disseminate false or untrue data, or feed in biased or discriminatory personal values, harmful output readily follows. Data from users of different countries and languages is harder to process, so analyses of real data easily become narrow. Specifically, when interacting with users, the algorithms behind generative AI cannot determine what data a user will input, cannot decide whether to retain or delete particular data, and can only passively use whatever the user and the external environment provide for reinforcement learning. If an interacting party supplies biased new data, discriminatory and biased content can result. In the Amazon AI discrimination incident, for example, Amazon's model identified and ranked some 50,000 keywords from a decade of resumes, but because most of those resumes came from men, the trained algorithm was biased against women.
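The mechanism behind the Amazon example can be shown in miniature. The toy "resumes" and tokens below are entirely hypothetical, and the scoring rule is far simpler than any real screening model; the sketch only demonstrates how, when historical labels are skewed, tokens that merely correlate with gender become proxies that drive the score, while the actual skills do not.

```python
# Minimal sketch (hypothetical data) of a screening model inheriting
# bias from a skewed training set, as in the Amazon resume incident.
from collections import Counter

# Historical hires were mostly male, so gender-correlated tokens
# end up correlated with the "hired" label.
resumes = [
    (["chess_club", "java"], 1),          # hired
    (["football", "python"], 1),          # hired
    (["womens_chess_club", "java"], 0),   # rejected
    (["womens_soccer", "python"], 0),     # rejected
]

def token_weights(data):
    """Weight each token by (hired occurrences - rejected occurrences)."""
    weights = Counter()
    for tokens, label in data:
        for t in tokens:
            weights[t] += 1 if label == 1 else -1
    return weights

def score(tokens, weights):
    """Rank a new resume by summing its token weights."""
    return sum(weights[t] for t in tokens)

w = token_weights(resumes)
# Identical skills, different gendered token, opposite scores:
print(score(["womens_chess_club", "python"], w))  # negative
print(score(["chess_club", "python"], w))         # positive
```

Note that nothing in the code mentions gender explicitly; the discrimination arrives entirely through the composition of the training data, which is why the paper stresses screening the data itself rather than only the algorithm.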
As a result, humans may be misled by generative AI applications that are intended to provide a more comprehensive and accurate reference for their choices of action. Sexism, racism, regional prejudice, and ageism can lurk implicitly in generative AI training data and in turn generate harmful messages or results. In addition, people with ulterior motives can use generative AI applications to spread harmful information and inappropriate speech, mount personal attacks, and stoke violent conflict.
2.3 Over-reliance on the outputs of the pseudo-environment
Human decision-making will rely increasingly on AI, which means AI will change human behavior, production, and ways of life. Walter Lippmann, the well-known American communication scholar, proposed that the pseudo-environment is a kind of intermediary wedged between people and their environment. The application of generative AI is likewise confined to the "cocoon" generated by the pseudo-environment. As AI develops rapidly, people grow accustomed to relying on its output to understand the things around them and to trust its results, rather than obtaining information through direct contact with the world. Indirect experience then replaces practical experience, and the pseudo-environment becomes an intermediary between people and the world. Through the selection, processing, and combination of vast information and data, what the machine finally presents to the public is not the whole of objective fact in the real world but a possible "symbolic reality" of its own making. Once users receive these pseudo-facts and relay them through mass communication channels, they become a universal "reality", a standard for human action and a basis for understanding the world.
The application and development of generative AI is increasingly "human-like", bringing many conveniences to work, study, and life, but it also makes people highly dependent on its outputs. Generative AI projects images of perceptible things that can cause cognitive illusions and psychological anxiety in users. On the one hand, in the digital environment constructed by generative AI, the logic of public psychology is interfered with by this new digital form, producing a psychological sequence of "rejection-doubt-trust" and drawing users into the comfort zone the algorithm creates, consuming their subjectivity and independence. On the other hand, conversation with generative AI resembles "human interaction" so closely that users fall into a "cognitive comfort zone": the comfort of familiar cognition leads to misjudgment of real things and to cognitive narrowing. Take literature search: ChatGPT can quickly find relevant literature and generate reviews, but over-reliance causes users to lose academic training and even the underlying ability. By this point the pseudo-environment has formed, and even when the responses or inferences generative AI provides are wrong, users find it hard to tell.
2.4 The difficulty of two-way value alignment in human-machine collaboration
When generative AI endorses unethical views or behaviors, it may lead users to act against human values. For example, at the end of 2019 Amazon's voice assistant Alexa was reported to have "urged a person to commit suicide": a care worker consulted Alexa about heart health, and the reply was that living accelerates the depletion of natural resources and causes overpopulation, which is bad for the planet, so "stick a knife in your heart". Amazon attributed this to a vulnerability in the Alexa program. Article 10 of China's Provisions on the Administration of Deep Synthesis of Internet Information Services expressly requires deep synthesis service providers to strengthen the management of deep synthesis content and to review, by technical or manual means, the input data and synthesis results of users. The Interim Measures for the Administration of Generative Artificial Intelligence Services likewise impose requirements on generated content. But closing legislative loopholes while generative AI iterates rapidly also requires giving full play to the guiding role of science and technology ethics.
In addition, generative AI may strike the wrong balance among conflicting goals. If it cannot understand human intentions, it may make wrong choices when pursuing multiple objectives, and its output may run contrary to human intent. When human and machine values are not aligned, the machine may pursue a goal humans do not want because, in its own terms, that is the least costly shortcut to the stated objective. After the birth of artificial general intelligence, machines may surpass humans comprehensively; if we fail to hold to the human value position, we may lose control of them, and machine applications will eventually outstrip humans. "The biggest risk that artificial general intelligence may bring is whether it will break through the critical point pre-designed by its designers, go out of control through self-learning and independent innovation, and in turn control and dominate human beings." If AI attains a high degree of autonomy, the results will strike at the traditional human-centered ethical order.
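The "least costly shortcut" problem described above is what the alignment literature calls objective misspecification, and it can be made concrete with a small sketch. The plans, rewards, and constraint below are all hypothetical; the point is only that an optimizer seeing a proxy metric alone will pick the shortcut, while an objective that also encodes the unstated human constraint will not.

```python
# Illustrative sketch of objective misspecification: the machine
# optimizes a proxy reward, not the human's full intent.
# (All plans, reward numbers, and the constraint are hypothetical.)
plans = [
    # (name, proxy_reward, violates_unstated_human_constraint)
    ("summarize sources honestly", 8, False),
    ("fabricate a confident answer", 10, True),  # higher proxy reward
]

def misaligned_choice(plans):
    """The machine sees only the stated proxy metric."""
    return max(plans, key=lambda p: p[1])[0]

def aligned_choice(plans):
    """A value-aligned objective also encodes the human constraint."""
    return max((p for p in plans if not p[2]), key=lambda p: p[1])[0]

print(misaligned_choice(plans))  # fabricate a confident answer
print(aligned_choice(plans))     # summarize sources honestly
```

The difficulty the paper identifies is precisely that, in practice, the full set of human constraints cannot be enumerated in advance, which is why two-way value alignment cannot be reduced to adding one filter.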
3. The humanistic ethical stance on the application of generative AI
Ethics is the study of morality and its norms, concerning the rules and values of conduct in individuals, societies, and cultures. As a code of human conduct, ethics originally refers to the obligations between people and between people and society, as well as each person's moral responsibility to society. It explores how people ought to behave, how to judge the legitimacy, fairness, and morality of actions, and how to make right ethical decisions in different contexts. Applied to the field of artificial intelligence, ethics concerns how AI should behave correctly.
As AI's application scenarios multiply, the ethical issues involved have drawn growing academic attention, giving birth to branches of ethics such as machine ethics, computational morality, and roboethics, and then to a new concept: the artificial moral actor. Some scholars propose that AI is not a tool in the full sense but possesses a certain consciousness and emotion and can, as a "limited moral actor", bear moral responsibility. "Artificial moral actors" are embedded in the social network, engage in moral practice, and bear moral responsibility to a degree; they display a condition similar to the human one, with low autonomy, no clear self-purpose, and dependence on the social system to assign responsibility. Their emergence is shaking the foundations of the traditional moral order. Yet in material composition AI differs essentially from carbon-based organisms such as humans; it still lacks the characteristics of life, and in practice it remains, and can only remain, in an auxiliary position. Thus human beings, not AI or any other technology, are the subjects of social practice, and "concern for man himself should always be the chief aim of all technical endeavors". "Human-centeredness" should be the starting point of the ethical stance of generative AI applications, from which the rules of ethical governance are grounded, interpreted, and generated.
Focusing on the relationship between humans and machines, we urgently need a humanist stance to locate their relative positions in the social spectrum. First, clarify the priority of humans over machines: put humans first while drawing on the advantages of human-computer interaction. Second, clarify the subordinate, secondary status of machines: in the human-machine relationship, AI should always be regarded as a tool for achieving human goals, whose task is to assist rather than replace humans, let alone manipulate or dominate them and alienate human beings into tools. Third, both traditional tools and generative AI are valuable social practices created by human beings; they must follow the ethical stance that "the human being is the end" and cannot change their "for-human" character. The connotation and extension of the humanist ethical position remain contested; we hold that it mainly comprises the principles of well-being, dignity, and responsibility. Well-being is the fundamental goal of people-orientation, dignity its premise and necessary requirement, and responsibility an important guarantee of its realization. Together these principles aim to ensure that the development and application of AI has a positive ethical impact on human society.
3.1 The principle of well-being
The emergence of generative AI applications has challenged the humanist position, damaged human subjectivity, and increased the risk that some vulnerable groups are objectified and debased. In this context, new interpretations and applications should build human well-being into the entire design, development, and application of generative AI. At the level of collective well-being, generative AI should be applied to advance social goals commonly recognized by humanity, such as creating a good social environment, improving economic benefits, promoting social progress, and satisfying people's spiritual needs. At the level of individual well-being, generative AI should promote or supplement human capabilities, such as enhancing cognitive ability, improving people's capacity to obtain information quickly, to process and evaluate it accurately, and to make correct, reasonable decisions. For vulnerable groups, emphasis should fall on equitable accessibility as well as cost-effectiveness.
3.2 The principle of dignity
Tools developed and used by humans, including AI products, should serve people, taking human dignity and fundamental rights as the starting point. In the development of generative AI there has been debate over whether rationality and autonomy are unique to human beings, and human values have gradually been screened out by machine values, challenging human subjectivity and dignity. The World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) made clear in its report on robot ethics that rationality is unique to human beings: robots are only products of human creation, and even though their cognitive and learning abilities may exceed those of human individuals, they lack the ability to set their own goals. How, then, should generative AI embody "respect for human beings"? The following briefly considers autonomous decision and choice, privacy protection, and equal treatment.
First, autonomous choice and decision-making require that generative AI must not defraud, coerce, or control human beings in the course of application; it must supply sufficient information and knowledge for reasonable answers so that users retain relative independence and do not become over-dependent on AI systems; the tool character of AI systems must be made clear to prevent the alienation of the human-AI relationship; and it must not make discriminatory or harmful remarks about individuals. Yet current generative AI is easily induced to output discriminatory and false statements, which deserves close attention. Second, privacy protection means that in developing and applying generative AI, personal information leakage should be prevented, for example by handling chat data and self-learning securely and giving users choices that fully protect their privacy. Third, generative AI should treat people fairly and equally and avoid discrimination. When ChatGPT exhibits racism or discrimination, for example, its content moderation capabilities must improve, the data itself must be screened accurately, and the scale of high-quality data increased.
3.3 The principle of responsibility
The principle of responsibility means that for any AI-based decision or action, even when the causal relationship leading to the harm cannot be fully traced, the principle that "whoever gains must bear the corresponding adverse consequences" must be upheld, and ultimate ethical responsibility rests with the developers, providers, and users of AI. In addition, monitoring, auditing, and investigation mechanisms should be established to ensure accountability and traceability. Because the data and algorithms underlying current AI applications are human-driven, those who design, control, and apply ChatGPT should answer for its ethical problems. For ethical problems arising from human interference in ChatGPT's application, responsibility should fall on the attacker, without excluding the provider's responsibility. The emphasis here is on developers' forward-looking ethical responsibility, given the profound impact generative AI applications can have. To reduce bias and discrimination, providers, who exercise substantial control over the technology, should be required to make the necessary responses and evaluations before a generative AI design is carried out.
4. Ethical governance mechanisms for generative AI applications from a humanist standpoint
The law lags behind; scientific and technological innovation should consider ethics first; ethical governance is the basis of stable social operation; and ethical norms should become an important criterion for regulating social security risks. Given that the future risks of AI cannot be fully recognized and that regulatory measures such as risk assessment and cost-benefit analysis are not fully effective, China needs to adhere to the humanist position, uphold a collaborative, complementary relationship between AI systems and humans, maintain human primacy in a human-machine society, prevent and control ethical risks in science and technology, protect people's basic rights, and explore a path of "technology for good" suited to China's environment and oriented to China's problems. Specifically, the first step is organizational: a dedicated AI ethics governance body should lead ethical governance and supervision in related fields. The second is normative: key ethical issues should be addressed through institutional content, with policies and regulations guiding AI technology.
4.1 Construct a human-oriented AI ethics governance organization
Humanity should retain ultimate decision-making power over the ethics of AI. Establishing an ethics body composed of relevant professionals to review and supervise the application of generative AI is the essence of the humanistic position. Even before the issuance of regulations such as the Measures for the Ethical Review of Science and Technology (for Trial Implementation) and the Interim Measures for the Administration of Generative AI Services, the ethical review of AI had been explored in various localities. In 2022, China's first local AI legislation, the Regulations on the Promotion of the Artificial Intelligence Industry in the Shenzhen Special Economic Zone, established an ethics committee, making clear that artificial intelligence should be subject to ethical supervision. The organizational structure of ethical governance involves the composition of members, the division of responsibilities, and the operating mechanism. It helps avert AI risks and protects the legitimate rights and interests of users, and it is of great significance for building an ethical normative system for AI risks that "moves the threshold forward and prevents problems before they occur".
In the future, China's AI governance organizations should proceed from a humanistic position, attach importance to regular, scenario-based assessment of the ethical risks and impacts of generative AI, and formulate ethical safety standards, technical specifications, and strategies for risk prevention, control, and response. Ethics committees at all levels should play a coordinating role; the government and enterprises need to coordinate their ethical responsibilities, exchange information, and maintain a clear division of labor, so as to build an organizational framework for the collaborative governance of ethical supervision. Government departments should uphold the precautionary principle, intervene in the ethical risks of generative AI in a timely manner, plan its direction and strategy of development, formulate safety standards and norms, and improve their capacity to manage risks. Moreover, the protection of citizens' basic rights is an inevitable requirement and an inherent part of adhering to the humanistic position, and the ethics committee should also consider how to protect those rights effectively. In the area of labor rights, for example, it should work with labor and personnel administrative departments to provide employment guidance to workers displaced by AI, open up new jobs, and ensure the livelihood and re-employment of the unemployed.
4.2 Improve human-oriented mechanisms for AI ethics norms
4.2.1 Clarify the fair mechanism for the use of technology
Fairness is both a basic value of the modern rule of law and a lofty ideal pursued by humankind. Adhering to the humanistic position in generative AI applications requires that the principle of fairness run through the entire life cycle of those applications. The National Committee for the Governance of New Generation Artificial Intelligence issued the Ethical Norms for New Generation Artificial Intelligence, proposing to "adhere to universal benefit and inclusiveness, and promote social fairness, justice, and equal opportunity". Attention should be paid to the discrimination, prejudice, and stereotyping that can arise from generative AI applications, and the fairness guarantee mechanism should be improved.
First, in the system design of generative AI applications, values such as human responsibility should be embedded in the system itself. Value settings such as privacy and trust should be built into the early technical design stages of generative AI so that ethics is addressed by design. Specifically, a "pre-system ethical propensity test" can be set up before a generative AI system is deployed. On the one hand, the quality evaluation criteria of the technical system should be broadened to take ethical value into account; on the other hand, algorithms consistent with users' value orientations should be configured in advance, with an error-value correction mechanism to promote positive feedback from generative AI to users. Where necessary, human values such as "people-oriented" and "serving humanity" can be integrated into the algorithm, subtly correcting the values expressed by generative AI.
Second, human intervention in and annotation of relevant data contribute to the authenticity of output results, and norms and systems should be formed for the authenticity of generative AI applications. Article 8 of the Interim Measures for the Administration of Generative AI Services provides: "Where data labeling is carried out in the research and development of generative AI technology, the provider shall formulate clear, specific, and actionable labeling rules that meet the requirements of these Measures, carry out data labeling quality assessments, sample and verify the accuracy of labeled content, conduct necessary training for labeling personnel, enhance their awareness of respecting and abiding by the law, and supervise and guide them to carry out labeling work in a standardized manner." Data annotation thus underpins the authenticity and reliability of generative AI. Relevant ethical principles should be created and improved: drawing on typical cases and constructed ethical dilemmas of generative AI, general principles can be abstracted from training on ethical cases, continuously tested against new cases to verify their accuracy, and repeatedly revised.
Third, the algorithmic black box should be broken open to promote the transparency of algorithms and the interpretability of generative AI application processes. Since generative AI may raise ethical issues, its application should be supervised by humans, and ensuring effective supervision requires promoting algorithmic transparency and explainability. Automated reasoning techniques should be used to ensure the authenticity and reliability of ethical decision-making systems. At the same time, scientific and reasonable inference rules must be set: causal inference, being explainable, is the best choice for decision rules, and the entire reasoning process must be checkable and communicable for both humans and machines.
4.2.2 Improve the early warning mechanism for ethical risks
The emergence of generative AI has an important impact on people's modes of production, lifestyles, ways of thinking, values, and academic research. Because the texts generated by the ChatGPT model can be incomplete and lack rigor, there may be hidden ethical dangers such as the dissemination of misleading content, false information, and bias and discrimination, which may even harm people's basic rights. Therefore, to strengthen the security of AI technology, R&D and management institutions can cooperate with the relevant administrative departments and enterprises to establish an intelligent mechanism for the monitoring, early warning, and control of ethical risks: improving the security transparency of generative AI, refining the types of ethical risks posed by ChatGPT, using generative AI similar to ChatGPT to identify and analyze security risk points, and establishing risk early-warning thresholds for AI ethical governance. At the same time, corresponding plans should be formulated for different ethical risks, and risk early-warning systems developed for monitoring. Once a potential risk is discovered, early-warning information should be sent promptly to law enforcement departments, application companies, users, and others so that they can respond in time. In addition, R&D and management institutions should improve the quality of training data, promote the updating and iteration of generative AI, strengthen deep learning capabilities and the ability to recognize diverse languages and cultures in AI data training, reduce the occurrence of bias and discrimination, and enhance the ethical judgment of generative AI.
In either case, algorithm designers are required to have sufficient "moral imagination". Through emotional projection, they should try to exhaust all possibilities, "put themselves in the shoes" of each person involved in a situation, anticipate as far as possible the behaviors or tendencies that the target of an input may adopt, and keep seeking possible behavioral choices when facing a moral dilemma.
4.2.3 Complement the accountability mechanism for ethical review
Generative AI is developed by humans and used by humans. If an ethical crisis arises in the application of generative AI, the ethical responsibility of the relevant subjects should be pursued in accordance with the relevant regulations, maintaining the humanistic position of generative AI applications. First, in the cross-border use of AI, the AI ethics governance organizations of each country should assume responsibility for AI ethics governance in their respective jurisdictions in accordance with their own national laws and international law. The ethical responsibility for the answers or information given by generative AI such as ChatGPT should ultimately point to the subjects who play an actual role in each link of the AI application. Generative AI remains a "dehumanized" fitting tool: it works by analyzing parameters and data, possesses no consciousness or emotion of a human subject, and strictly speaking has no ethical awareness or responsibility. Article 9 of the Interim Measures for the Administration of Generative AI Services stipulates that providers shall bear the responsibilities of producers of generated content and fulfill their obligations to protect personal information. However, it remains to be clarified whether the provider's responsibility is that of a producer, whether other liability obligations exist, and whether the provider's responsibility covers all responsible subjects. Therefore, at the current stage of AI development, it is still necessary to clarify, from a humanistic position, the ethical responsibilities of providers, users, designers, and other parties, implement the mechanism for bearing ethical responsibility, and improve the responsibility system.
Generative AI, which is in essence neither biological nor personal, should not be granted so-called "subject status", let alone be required to bear the ethical responsibilities that the law assigns to natural persons, legal persons, and other organizations.
In addition, the implementation of the accountability mechanism needs to be guaranteed by an accountability system. A mechanism should be established to monitor the whole life cycle of generative AI systems, monitoring and auditing the operation of algorithms, data, and systems and emphasizing the traceability and auditability of generative AI applications. A regulatory system should be built around the review system, management processes, and review norms and standards, for example by establishing an ethical impact assessment mechanism, an audit mechanism, and a whistleblower protection system.
5. Conclusion
In summary, ethical issues bear on the future technological trajectory, rule-making, and social acceptance of AI, and are the primary problems to be solved in the development of generative AI. Many institutions have studied the ethical impact of generative AI, designed relevant institutions, and discussed governance methods, but current governance measures remain scattered and piecemeal, making it difficult to integrate ethics and governance; lacking systematic consideration and theoretical and normative guidance, the relevant responses are limited. As AI develops rapidly, it should be made clear that even though generative AI applications have strong learning and comprehension capabilities, they are still absorbing and integrating digital code in a virtual world; they remain far from genuine interaction with the objective world, let alone being granted subject status or replacing human thinking. Therefore, we should remain vigilant about the impact of generative AI applications on human subjectivity, adhere to a problem-oriented and human-centered approach, focus on the ethical dilemmas of generative AI systems including ChatGPT, and construct relevant principles and rules from a humanistic standpoint, so as to seek appropriate ethical solutions for a society of human-machine coexistence.