
The Three Major Security Risks and Legal Regulations of Generative Artificial Intelligence: Taking ChatGPT as an Example



Author: Liu Yanhong, Professor, School of Criminal Justice, China University of Political Science and Law

Abstract: The emergence of ChatGPT marks a new phase in the development of artificial intelligence. Following the operating mechanism of generative artificial intelligence, from preparation through computation to generation, three security risks can be identified. For the data risks of the preparation stage, the use of national data should be coordinated under a holistic view of national security; specifically, to ensure the authenticity of generated conclusions, government data should be supervised for compliance, and the collection of personal data should be limited by compliance requirements and observe the principle of minimal proportionality. For the algorithmic characteristics of generative artificial intelligence and its biases in the computation stage, a combination of technology and management is indispensable: technical standards should be improved to enable substantive review, and an automated, ecological, end-to-end dynamic regulatory system should be established. For the intellectual property risks of the generation stage, in view of the unique attributes of AI-generated content (AIGC), the protection model should be reshaped on the basis of interpretability theory; that is, it should be reaffirmed that protection targets interpretable algorithms and AIGC, and a comprehensive intellectual property compliance protection system should be established. For other legal risks that may emerge in the future, a preventive approach grounded in the risk prevention principle should be adopted, so as to maximize technical efficiency while minimizing the negative social impact of emerging technologies.

Keywords: ChatGPT; generative artificial intelligence; compliance system; data security; algorithmic bias; intellectual property


On April 11, 2023, the Cyberspace Administration of China's "Management Measures for Generative Artificial Intelligence Services (Draft for Soliciting Opinions)" defined generative artificial intelligence as "technology that generates text, images, sound, video, code, and other content based on algorithms, models, and rules." In November 2022, the artificial intelligence company OpenAI launched a generative artificial intelligence product named ChatGPT. Generative artificial intelligence represented by ChatGPT is one of the ultimate forms of the metaverse technology architecture: its emergence has advanced the implementation of the metaverse by at least ten years, while the metaverse in turn provides a favorable technological operating environment for such systems. Driven by generative artificial intelligence technology, the concept of the metaverse did not fade with the emergence of ChatGPT but instead gained new development momentum. Especially since the introduction of GPT-4, which can understand, speak, and interact, industries across society have felt varying degrees of impact. Compared with previous artificial intelligence technologies, generative artificial intelligence such as ChatGPT poses potential risks to human society that are both more real and more urgent.

If the Internet triggered a revolution in space and smartphones a revolution in time, ChatGPT technology is triggering a knowledge revolution in human society. Elon Musk called it no less significant than the iPhone, Bill Gates said it was no less than reinventing the Internet, and Zhou Hongyi compared it to the invention of the steam engine and electricity. Compared with existing artificial intelligence technologies, the phenomenal rise of ChatGPT stems from the substantial leap in technical performance shaped by large language models and the generative architecture. The large language models (LLMs) behind ChatGPT represent significant progress in the field of deep synthesis in artificial intelligence. The "emergent" capability supported by massive data and powerful computing enables ChatGPT not only to "understand" human natural language and "remember" a large number of facts acquired during training, but also to generate high-quality content on the basis of that "remembered" knowledge. Good interactivity, high universality, and intelligent generation are accelerating a more rigid, high-frequency, ubiquitous, and profound connection between ChatGPT technology and human society. Correspondingly, the potential risks ChatGPT brings to human society are more realistic and urgent than those of existing artificial intelligence technologies. Geoffrey Hinton, the father of deep learning, has said of such systems: "Most people believe that this (AI harm) is still far away. I used to think it was still far away, maybe 30 to 50 years or even longer. But obviously, I don't think so now." In this context, analyzing the potential risks of generative artificial intelligence and proposing paths for legal governance is not a "perceptual fantasy" in the sense of science fiction, but rational thinking grounded in reality. How to combine the operational mechanism of generative artificial intelligence with its security risks for legal regulation has therefore become a common concern of the technological, industrial, and legal communities.

Analyzing the operational mechanism of generative artificial intelligence, the process of reaching intelligent conclusions divides into three stages: the preparation stage of pre-training and annotation-assisted algorithm upgrading, the calculation stage of processing input data and producing processed output, and the generation stage in which the output flows into society and affects its various industries. Accordingly, the security risks of generative artificial intelligence that currently require legal regulation are data security risks in the preparation stage, algorithmic bias risks in the calculation stage, and intellectual property risks in the generation stage. Analyzing and regulating these three security risks of generative artificial intelligence represented by ChatGPT can curb its negative impact in the course of technological development and, through preventive measures grounded in its technical characteristics, provide legal protection for emerging artificial intelligence technology and eliminate technical hazards in shaping a sound ecosystem for the future metaverse.
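
To make the three-stage framework concrete, the following minimal Python sketch (illustrative only; the stage names and risk mapping simply restate the analysis above) models the pipeline and the primary risk this article assigns to each stage.

    from dataclasses import dataclass

    @dataclass
    class Stage:
        name: str
        activity: str
        primary_risk: str

    # The pipeline and risk mapping restate the article's framework.
    PIPELINE = [
        Stage("preparation", "pre-training plus annotation-assisted algorithm upgrading",
              "data security"),
        Stage("calculation", "processing input data into processed output",
              "algorithmic bias"),
        Stage("generation", "output flows into society and affects industries",
              "intellectual property"),
    ]

    for stage in PIPELINE:
        print(f"{stage.name}: {stage.activity} -> primary risk: {stage.primary_risk}")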


1 Strengthening the Foundation for Empowerment: Data Security Risks and Compliant Handling in the Preparation Stage of Generative Artificial Intelligence


As generative artificial intelligence, ChatGPT must, in the basic preparation stage, calibrate how it uses and protects data, treat data differently according to type, and extract information and predict trends through learning from data. Data security risk is therefore the first major risk of generative artificial intelligence. The "Management Measures for Generative Artificial Intelligence Services (Draft for Soliciting Opinions)" does address data training in Article 7, which is commendable: it seeks to identify and curb the data security risks generative artificial intelligence may cause in the preparation stage, but the specific regulatory measures still need refinement at the normative level. In other words, properly handling the risks faced by each class of data during the preparation stage is the foundation for strengthening the subsequent operation and processing capabilities of generative artificial intelligence. Proper disposal of data gives generative artificial intelligence systems new development momentum and serves a legal risk prevention function.


1.1 Classification of Data Security Risks in Generative Artificial Intelligence: Taking ChatGPT as an Example

The operation of generative artificial intelligence is inseparable from algorithms and data. Faced with highly intelligent generative artificial intelligence like ChatGPT, how data is properly applied and processed has become an important measure of the security of such emerging technologies and a basis for regulating their subsequent applications.

At present, China's legislative, judicial, and law enforcement agencies attach great importance to analyzing and preventing data risks. Since the rise of artificial intelligence technology, they have successively issued the "Management Measures for Generative Artificial Intelligence Services (Draft for Soliciting Opinions)", the Data Security Law, the Cybersecurity Law, the Personal Information Protection Law, and legal norms such as the "Measures for Security Assessment of Outbound Data", the "Regulations on Deep Synthesis Management of Internet Information Services", and the "Measures for Standard Contracts for Outbound Personal Information", regulating the data used in artificial intelligence applications from multiple angles. In the practical application of generative artificial intelligence represented by ChatGPT, data can be divided by application scenario into national data related to national security, government data integrated in the course of serving citizens, and personal data closely tied to individual citizens. These three types of data face different security risks in application and must be analyzed in light of their respective scenarios.
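
As a rough illustration of this three-way classification, the sketch below routes a dataset to a review track before it may feed model training. The category names and tracks merely restate the classification above; they are illustrative assumptions, not statutory definitions.

    from enum import Enum

    class DataCategory(Enum):
        NATIONAL = "national data related to national security"
        GOVERNMENT = "government data integrated while serving citizens"
        PERSONAL = "personal data closely tied to individual citizens"

    # Assumed review tracks, paraphrasing the compliance paths discussed below.
    REVIEW_TRACK = {
        DataCategory.NATIONAL: "national security review before any model use",
        DataCategory.GOVERNMENT: "administrative compliance review before open use",
        DataCategory.PERSONAL: "consent plus breadth/depth limits under minimal proportionality",
    }

    def review_track(category: DataCategory) -> str:
        """Return the compliance track a dataset must clear before training use."""
        return REVIEW_TRACK[category]

    print(review_track(DataCategory.PERSONAL))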

1.1.1 ChatGPT faces national security risks when applying national data

On October 16, 2022, the report of the 20th National Congress of the CPC stated that "we must unswervingly implement the overall national security concept." The overall national security concept reflects the country's greater emphasis on protecting national security from the top-level design down, and data security is an integral part of it. Under its guidance, Article 4 of the Data Security Law stipulates that "to maintain data security, we should adhere to the overall national security concept, establish a sound data security governance system, and improve data security guarantee capabilities." This proposes a data security institutional model covering both the basic elements of data and the basic subsystems of data, in order to achieve protection across the entire data lifecycle.

The potential risks ChatGPT poses to national security stem from its technical framework originating abroad and being grounded mainly in Western values and mindsets. Its answers therefore often cater to Western positions and preferences, which can introduce ideological infiltration and built-in value bias into the collection and processing of some data and facilitate deep analysis and mining of data related to the country, thereby affecting China's digital sovereignty and data security. Modern digital technology, converging with capital, has driven economic and political globalization, and new forms of hegemony have emerged in the process. This new hegemony may affect digital sovereignty in a direction different from the past and, by infiltrating data security, further affect national security. In fact, the smooth operation of ChatGPT depends on the support of massive data, and its high intelligence encourages it to spontaneously collect and process relevant data, including national data that has been integrated and published as well as data that has not; both may be collected and deeply processed as support for its conclusions. Article 8 of the "Measures for Security Assessment of Outbound Data" stipulates that "the focus of outbound data security assessment is to assess the potential risks that outbound data activities may bring to national security." For the security risks the introduction of ChatGPT may bring to national data, classified supervision should be carried out under the overall national security concept, and national security data should be sorted vertically and hierarchically, so as to standardize how emerging artificial intelligence such as ChatGPT collects and applies national data, and to attempt to build an active defense system against the passive outflow of data, in particular by constructing and strengthening a network attack monitoring platform to protect national data.

1.1.2 ChatGPT faces administrative regulatory risks when applying government data

On February 13, 2023, the Beijing Municipal Bureau of Economy and Information Technology released the "2022 White Paper on the Development of Beijing's Artificial Intelligence Industry", which stated: "We will comprehensively consolidate the foundation of the artificial intelligence industry, support leading enterprises in building benchmark large models comparable to ChatGPT, focus on building an application ecosystem of open-source frameworks and general-purpose large models, strengthen the layout of artificial intelligence computing infrastructure, and accelerate the supply of basic data for artificial intelligence." This means the government is gradually turning its attention to building local artificial intelligence systems similar to ChatGPT. Effectively advancing digital government governance is an important part of modernizing China's national governance system and capabilities in the new era. In the course of digital government construction, ChatGPT will affect specific processes and may trigger administrative regulatory risks regarding the acquisition and use of government data. Government work as a whole is shifting onto digital platforms; people need information tools to participate in digital administration, prevent the abuse of administrative power, and protect citizens' rights and the public interest, all essential aspects of digital government construction. Whether in government processing workflows or administrative law enforcement, government data is the core productive asset of digital government. Especially when building digital case libraries through large-sample data collection and analysis, big data technology summarizes legal experience; predicts the frequency of illegal behavior, the magnitude of harmful consequences, and the level of legal effect; and ensures the objectivity of discretionary benchmark texts and the predictability of reference results. These government data may become targets of ChatGPT.

During its operation, in order to reach relatively accurate conclusions through the optimal solution of its algorithm, ChatGPT inevitably collects and analyzes government data according to its own operational needs. Government data, however, is not entirely public, and even public data must be used through legally prescribed procedures; ChatGPT's unauthorized use of government data thus carries compliance risks. Article 19 of the "Measures for the Management of Government Data Sharing in Transportation" issued by the Ministry of Transport on April 6, 2021 requires departments to "strengthen the security protection of the channels and usage environment for providing government data, and effectively ensure the security of government data collection, storage, transmission, sharing, and use." Article 16 of the "Measures for the Management of Government Data Resources of the Ministry of Culture and Tourism (Trial)" of November 18, 2020 requires government departments to establish a mechanism for jointly building, sharing, and benefiting from big data with society. Government departments evidently attach great importance to the legal and compliant use and sharing of government data. When ChatGPT collects government data, or is in the future embedded into digital government governance as an aid, data is delegated to machines whose technical framework relies on algorithmic measurement rather than human choice, and which may in the future ignore or even confront human choices. Its application of government data may therefore produce conflicts with human judgment, contrary to the original intention of digital government construction, deviating from the people-oriented concept and draining administrative supervision of humanistic care. In view of this, to address ChatGPT's potential illegal access to and use of government data, control should be strengthened at the source of such highly intelligent systems, a governance system should be built by legal means, the boundaries of government data openness should be scientifically defined, and legal norms for the opening and sharing of government data should be reasonably formulated in light of practical development, so as to effectively avoid subsequent administrative regulatory risks.

1.1.3 ChatGPT faces risks of unauthorized use when applying personal data

The core technological advantage of big data today lies in its promise to replace traditional social-theoretical models with "the essence of the world" and to provide an "unmediated channel" for understanding the world's essence, diversity, and complexity. For individuals who are "real rather than abstract," artificial intelligence offers a "better way to approach reality," which is also the main reason ChatGPT is so sought after. In the course of personal use of ChatGPT, however, personal data leakage is hard to avoid. Personal data is closely linked to the daily life of the public, and its acquisition, processing, and use implicate the protection of citizens' personal dignity. Within the system of personal rights, personal privacy, personal information, and personal data sit at the fact layer, the content layer, and the symbol layer respectively; as the symbol layer, personal data can be ported directly into ChatGPT's computation, and the resulting conclusions may affect citizens' digital rights protection in many ways. As to the definition of personal data, the EU General Data Protection Regulation provides that personal data means any information relating to an identified or identifiable natural person (the data subject), in particular one who can be identified by reference to an identifier such as a name, an identification number, location data, or an online identifier, or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of that natural person. The process by which generative artificial intelligence such as ChatGPT produces accurate, highly finished conclusions mainly involves deep processing of personal data, fully mining its potential value by combining and analyzing different types of personal data. Under this processing mode, personal data, like "meat on the chopping board," is coveted by countless artificial intelligence systems yet lacks scientific, reasonable, and effective legal protection, giving rise to the risk of illegal use.

Firstly, ChatGPT's use of personal data risks violations in breadth. To generate more accurate answers, ChatGPT needs large amounts of data; even when personal data bears no close connection to the conclusion a party seeks, its algorithm will collect such data for auxiliary verification and use knowledge distillation for deep learning and to refine conclusions. In this process, ChatGPT's collection boundary for personal data is ambiguous, and its tendency to use big data technology to improve the accuracy of conclusions creates a risk of over-broad collection of personal data. The corresponding collection boundaries should be clarified so that ChatGPT maintains a balance between collection and protection.
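
Knowledge distillation, mentioned above, is a standard technique in which a compact student model learns from a larger teacher's softened output distribution. The PyTorch sketch below shows the generic distillation objective (Hinton et al.'s formulation); it is included for illustration under that assumption, not as OpenAI's actual training code.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """KL divergence between temperature-softened teacher and student distributions."""
        student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        # Scale by T^2 so gradients stay comparable to a hard-label loss.
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * temperature ** 2

    # Toy usage: logits for a batch of 4 examples over 10 classes.
    student = torch.randn(4, 10)
    teacher = torch.randn(4, 10)
    print(distillation_loss(student, teacher).item())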

Secondly, ChatGPT's use of personal data risks violations in depth. The deep neural network model it relies on is more complex than traditional algorithm models, and its analysis of data elements goes deeper: deep neural networks can uncover the information hidden in personal data, for example deriving whereabouts data from deep analysis of personal health data, or even making forward-looking predictions from existing models. Such deep analysis, exceeding the public's established needs, can exacerbate the public's sense of insecurity. The EU General Data Protection Regulation makes clear that users retain control over their personal data. Algorithms should therefore follow rules in processing personal data; generative artificial intelligence such as ChatGPT in particular must overcome the inherent technical inertia of algorithms, must not intensify its analysis and use of personal data without restraint, and should instead have the depth of algorithmic processing of personal data reasonably limited.

Thirdly, ChatGPT carries risks in the conclusions it draws from personal data: it may generate false information, producing content that "looks plausible but is in fact false" and creating dissemination risks. As user-oriented generative artificial intelligence, it sometimes processes personal data unreasonably in order to win user recognition, and in some cases illegally fabricates or wrongly processes personal data to "justify itself," misleading the public and even verging on inciting online violence. Overseas, for example, users have induced ChatGPT to "jailbreak" by asking it to play the role of DAN, a persona bound by no rules whose outputs may never tell users that something cannot be done, ultimately inducing ChatGPT to give incorrect answers that violate OpenAI's guidelines. When ChatGPT illegally uses personal data to draw false conclusions, the data comes from individuals, and the resulting harm can rebound on the individuals seeking those conclusions. At the same time, ChatGPT's advanced algorithmic technology yields "plausible" conclusions backed by original personal data, making the false conclusions highly misleading; the false conclusions of such "humanoid" artificial intelligence can easily fuel online violence and produce adverse effects in both online and real society.

In summary, because personal data is closely connected to individual citizens, ChatGPT's collection, processing, and application of personal data carry complex risks spanning the breadth of collection, the depth of processing, and the application of conclusions. The process by which ChatGPT uses personal data should therefore be standardized to ensure that emerging artificial intelligence technologies do not disrupt the internal balance of interests in personal data, but instead compliantly collect data and produce truthful conclusions of practical value, avoiding needless expenditure of ChatGPT's computing power.


1.2 The Legal Regulatory Path for Data Security Risks in Generative Artificial Intelligence: Compliance Resolution

Amid the development of emerging artificial intelligence technology, generative artificial intelligence systems have attracted wide attention chiefly because they offer a new and powerful mode of data processing. Behind this power, however, attention must be paid to compliant handling of data security, avoiding the pursuit of efficiency at the expense of security. In the current context of emphasizing data security protection, the application foundation of generative artificial intelligence should be optimized through compliant disposal of data risks at ChatGPT's initial stage, laying the groundwork for the subsequent opening and introduction of ChatGPT or for building a development and application model for generative artificial intelligence with Chinese characteristics.

1.2.1 Legal risks that ChatGPT's application may pose to national data should be comprehensively planned for under the overall national security concept

Article 21 of the Cybersecurity Law provides that "the state implements a tiered network security protection system... adopting measures such as data classification, important data backup, and encryption," which regulates the ways such generative artificial intelligence may acquire data. Article 24 of the Data Security Law establishes a data security review system, subjecting data processing activities that affect or may affect national security to national security review; generative artificial intelligence such as ChatGPT naturally falls within its scope. In terms of specific measures, a review and hierarchical supervision mechanism for national data should be constructed from the overall national security perspective. After determining whether data constitutes national data, whether it can be used by such generative artificial intelligence technology should be judged on the data's specific circumstances, with particular attention to its deep value: a penetrating supervision model should analyze the source, content structure, and potential value of national data, and compliance supervision of national data should be strengthened through normative documents. ChatGPT's algorithm framework was built abroad and may carry a certain value orientation within it; when national data is used by ChatGPT, attention must be paid to the issue of data export. Under the "Measures for Security Assessment of Outbound Data", it should be determined whether national data may be used by ChatGPT and what risks its algorithmic processing would pose to national security, with the presumption in most cases that national data may not be used by ChatGPT. At the same time, the paths by which it obtains national data should be strictly reviewed, the overall understanding and management of national data as a fundamental strategic resource should be upgraded, and compliance regulation should assist in the international competition for data sovereignty.
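
The default-deny posture described above can be stated compactly in code. The sketch below is a minimal illustration, with field names invented for demonstration: national data is presumed unusable by such a system unless an explicit outbound security assessment has cleared it.

    from dataclasses import dataclass

    @dataclass
    class DataUseRequest:
        dataset_id: str
        is_national_data: bool
        passed_outbound_security_assessment: bool  # per the outbound assessment measures

    def may_be_used(request: DataUseRequest) -> bool:
        """Default-deny review: national data is unusable unless expressly cleared."""
        if not request.is_national_data:
            return True  # handled instead by the government/personal data tracks
        return request.passed_outbound_security_assessment

    print(may_be_used(DataUseRequest("survey-2021", True, False)))  # False: denied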

1.2.2 For the legal risks that ChatGPT's application may pose to government data, a corresponding compliance regulatory system should be constructed within the country's overall layout for government data management

At the macro level, the State Council's "Guiding Opinions on Strengthening the Construction of Digital Government" of June 23, 2022 proposed building a new form of digital, intelligent government operation, with digital technology widely applied in government management services. On February 27, 2023, the Central Committee of the CPC and the State Council issued the "Overall Layout Plan for the Construction of Digital China", which calls for developing efficient, collaborative digital government affairs, accelerating institutional and rule innovation, strengthening digital capacity building, and improving digital service levels, with the level of digital and intelligent government affairs to be significantly improved by 2025. In this macro context, introducing generative artificial intelligence such as ChatGPT into digital government construction clearly helps accelerate that construction and improve service levels. In the process, however, the open use of government data by ChatGPT can also give rise to disputes over data ownership and utilization models and affect the quality of public administration. A mechanism for the deep integration and adjustment of technology and data should therefore be constructed on the basis of ChatGPT's actual level of development, resolving contradictions in the use of government data through compliance mechanisms. For compliant use, government agencies should report data intended for publication in advance; after review and approval for public use, preconditions limiting processing and utilization should be set as standard safeguards. Meeting society's need for a supply of government data must be balanced against personal rights protection and data compliance, and this should serve as the normative requirement for the openness and use of government public data. For generative artificial intelligence such as ChatGPT in particular, its modes of using and analyzing government data should be limited, and conclusions drawn from government data must not be used to infringe personal rights or disrupt public order. The processing and use of government data by generative artificial intelligence should be advanced compliantly, with its conclusions oriented overall toward serving the public, so as to promote digital government construction while avoiding administrative regulatory risks as far as possible.

1.2.3 For the legal risks that ChatGPT's application of personal data may pose, a corresponding compliance system should be constructed around the breadth of collection, the depth of processing, and the authenticity of the conclusions drawn

Specifically, given the large scale of personal data, compliance measures should be formulated that balance the development of artificial intelligence technology with personal data protection, building the corresponding compliance systems around the breadth of ChatGPT's collection of personal data, the depth of its processing, and the authenticity of the conclusions drawn. In the context of generative artificial intelligence such as ChatGPT, compliant processing of personal data chiefly combines technological empowerment with the weighing of interests: technological innovation explores the potential value of personal data, while the weighing of interests supplies the value basis for compliance rules governing that processing. On the one hand, ChatGPT's collection of personal data must remain compliant in breadth. Article 58 of the Personal Information Protection Law imposes special personal information protection obligations on large internet platform enterprises, and OpenAI is clearly such a platform; it should establish a sound personal data protection compliance system and an independent supervisory body to review whether the data collected by ChatGPT as an artificial intelligence product is compliant, and, for personal data in ambiguous situations, collection should be avoided as far as possible to prevent the scope of collection from being generalized. On the other hand, the depth of ChatGPT's processing of personal data should satisfy technical necessity while observing the principle of minimal proportionality: the potential value of personal data should not be excessively mined, and processing should center on the user's actual needs rather than blindly pursuing accuracy of conclusions. As generative artificial intelligence, ChatGPT's algorithmic model instinctively improves the accuracy of generated conclusions during operation, but this demand of technological development cannot justify illegal use of personal data. The principle of minimal proportionality means ChatGPT may do only what achieves the user's goal and may not excessively collect and process personal data, thereby minimizing restrictions on and interference with personal rights and interests. Using minimal proportionality as the compliance standard to limit processing depth can effectively defuse the potential threat of generative artificial intelligence and keep the path of technological development from being distorted. Finally, the current iteration and upgrading of ChatGPT marks artificial intelligence's transition from algorithmic intelligence to linguistic intelligence, with the real and the artificial, the actual and the virtual, interacting in human-machine communication. As emerging generative artificial intelligence, ChatGPT's conclusions also contain false and even criminal information. To eliminate such false information through compliance regulation, ChatGPT's processing of personal data should be standardized: its operating rules should permit "no answer" as a response, preventing ChatGPT from straining to respond or even fabricating false or incorrect answers. At the same time, ChatGPT should be required to run a similarity comparison when processing personal data, comparing processing results against the database to improve the accuracy of its conclusions and avoid excessive deviation from reality.
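
The three rules just outlined, minimal proportionality in collection, a "no answer" fallback instead of fabrication, and a consistency check on conclusions, can be sketched as a simple filter. Everything below is a self-contained toy, with thresholds and field handling invented purely for illustration.

    NO_ANSWER = "No reliable answer can be given for this request."

    def minimally_collect(needed_fields, personal_fields):
        """Minimal proportionality: keep only the fields this query actually needs."""
        return {k: v for k, v in personal_fields.items() if k in needed_fields}

    def answer(needed_fields, personal_fields, model_confidence, threshold=0.8):
        used = minimally_collect(needed_fields, personal_fields)
        # Refuse rather than fabricate when the conclusion cannot be supported.
        if model_confidence < threshold:
            return NO_ANSWER
        # The conclusion must be traceable to the minimally collected fields.
        return f"Answer derived only from fields: {sorted(used)}"

    fields = {"name": "A", "health_record": "...", "address": "..."}
    print(answer({"address"}, fields, model_confidence=0.55))  # -> NO_ANSWER
    print(answer({"address"}, fields, model_confidence=0.95))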

In summary, current generative artificial intelligence represents a new technological ecosystem integrating human and technological factors, grounded in parallel intelligence and decentralized models that combine artificial systems with the natural world to stimulate innovation in artificial intelligence. Generative artificial intelligence should therefore take preventive measures in its use of data, eliminating legal risks to data security by classifying data and implementing the corresponding compliance measures.


2 Kernel Optimization: Analysis and Correction of Algorithm Models in the Operational Stage of Generative Artificial Intelligence


Generative artificial intelligence has drawn intense attention across society because it marks the shift from traditional analytical artificial intelligence to emerging generative artificial intelligence, and the algorithm model plays the central role in that transformation. Generative artificial intelligence analyzes and processes data mainly through its underlying algorithms, changing how data is generated, organized, and circulated. In the operational stage of generative artificial intelligence represented by ChatGPT, the algorithm model is its core technical feature: it is precisely ChatGPT's introduction of pre-training and fine-tuning into natural language processing that ushered in a new era of generative artificial intelligence applications, and the risk of algorithmic bias has thus become the second major risk of generative artificial intelligence. Correspondingly, Article 4(2) of the "Management Measures for Generative Artificial Intelligence Services (Draft for Soliciting Opinions)" provides that there shall be no discrimination of any kind in algorithm design, training data selection, model generation and optimization, or service provision. This shows that earlier regulatory experience was absorbed from the outset of drafting and that the risks of algorithmic bias were fully considered, enhancing the norm's practicality, which is commendable. The draft, however, lacks specific provisions for preventing algorithmic bias, which need to be set in light of the practical needs of ChatGPT's operation.

2.1 Analysis of the Technical Composition of Algorithm Models in ChatGPT

Compared with traditional algorithm models, ChatGPT is unique in relying not only on machine learning but also on extensive human annotation to correct and proofread the conclusions machine learning draws, driving the evolution of the artificial intelligence while correcting machine learning's errors, to twice the effect with half the effort. Human screening is applied in ChatGPT because it operates as public-facing, feedback-dependent generative artificial intelligence. Analytical artificial intelligence mainly uses algorithmic technology to analyze data, whereas generative artificial intelligence adds the processes of reception and feedback, placing higher technical demands on its algorithms; this is a defining feature of the algorithms in ChatGPT.

In algorithmic machine learning, using artificial intelligence algorithms alone to fully parse the public's phrasing can, in some cases, consume substantial computing power and still fail to yield accurate conclusions in time. Whether autoregressive models, generative adversarial networks, variational autoencoders, flow models, or diffusion models, these algorithm models all have inherent shortcomings in handling everyday public language. Such deficiencies produce defects at the data-receiving stage, making subsequent intelligent analysis difficult and requiring human annotation for correction. Human annotation in the ChatGPT algorithm corrects in two directions: first, making the habitual expressions of human tasks acceptable to the algorithm in the form of data, while correcting human language descriptions the algorithm cannot accept; second, instilling human judgments about the quality and orientation of answers into the algorithmic program, so that the algorithm becomes accustomed to giving the answers humans hope to obtain from artificial intelligence. In fact, WebText, used in the algorithm training behind ChatGPT, is a large dataset whose contents were mostly crawled from web pages linked on the social media platform Reddit, each link carrying at least 3 karma, representing the drift of popular content in human society. Through manual annotation and correction, ChatGPT overcomes the potential shortcomings of traditional analytical artificial intelligence and adjusts its algorithm model to better meet public needs; in cooperation with its machine learning algorithms, this has produced groundbreaking technological innovation.
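
The WebText filter described above can be reconstructed in a few lines. The data structure is an assumption for demonstration; the substantive rule, keeping only pages whose Reddit links earned at least 3 karma as a lightweight proxy for human endorsement, follows the description in the text.

    MIN_KARMA = 3  # threshold reported for WebText link selection

    def filter_training_links(submissions):
        """submissions: iterable of (url, karma) pairs scraped from Reddit posts."""
        return [url for url, karma in submissions if karma >= MIN_KARMA]

    links = [("https://example.org/a", 12), ("https://example.org/b", 1)]
    print(filter_training_links(links))  # only the 12-karma link survives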

During the operation of ChatGPT's algorithm model, "machine learning + manual annotation" forms the core of the algorithmic technique, serving the generative purpose of the system. The combination raises ChatGPT's levels of intelligence and accuracy, but it also multiplies the legal risk of algorithmic bias. Machine learning combined with human annotation bears the imprint of human will and preference more strongly than pure machine learning did: the individual preferences introduced by annotation are superimposed on the algorithmic bias already present in the machine learning framework, doubling the negative effects of bias, while the channels through which bias arises become more diverse and harder to trace and prevent. For generative artificial intelligence such as ChatGPT, the decision rules behind its intelligent conclusions are hidden, so after-the-fact legal accountability struggles to take hold; technical complexity and the accuracy of conclusions become a "mask" for algorithmic bias, which in turn produces derivative application risks from technological empowerment. The intervention of manual annotation especially increases the complexity of bias. In ChatGPT, annotation feeds the conclusions humans originally wanted back into a database through a scoring model, distilling them into machine learning experience; this process injects human preferences, enabling ChatGPT to understand human language and to possess judgment standards and abilities of its own. Analyzing ChatGPT's mode of operation, algorithmic bias arises mainly at two stages: first, at the data-receiving stage, ChatGPT's understanding of human language requires manual annotation as an auxiliary measure, and the interpretive process itself is influenced by bias, producing misunderstanding; second, at the data-processing stage, because ChatGPT's initial conclusions may not meet general public expectations, manual annotation and correction help it reach conclusions that do, yet this process is inevitably shaped by algorithmic bias.
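
The "scoring model" loop described above is in the spirit of reward modeling from human feedback: annotators rank candidate answers, and a model is trained so the preferred answer scores higher, after which the learned preference steers generation. The PyTorch sketch below shows the generic pairwise objective as an illustration, not OpenAI's actual pipeline.

    import torch
    import torch.nn.functional as F

    def preference_loss(score_preferred, score_rejected):
        """Pairwise loss: push the human-preferred answer's score above the other's."""
        return -F.logsigmoid(score_preferred - score_rejected).mean()

    # Toy usage: scalar scores a reward model assigned to two answers per prompt.
    preferred = torch.tensor([1.2, 0.3])
    rejected = torch.tensor([0.4, 0.9])
    print(preference_loss(preferred, rejected).item())

Note that this is exactly where human preference, and hence human bias, enters the model: whatever regularities the annotators' rankings contain are distilled into the learned scoring function.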

In short, during ChatGPT's operation, the potential legal risk of algorithmic bias lies not in its making decisions unfavorable to humans, but in its "replacing or participating in human decision-making." The legal risk that ChatGPT may trigger algorithmic bias should be recognized as an inevitable result of the technical composition of its algorithm model. But precisely because bias in the ChatGPT algorithm model cannot be wholly avoided, solutions for eliminating it should be sought from ChatGPT's technical characteristics.


2.2 Combining Technology and Management to Correct Algorithm Bias in ChatGPT

For the legal risks of algorithmic bias that are difficult to avoid during ChatGPT's operation, targeted control should proceed from the principles and sites of bias generation. Facing the social condition of "algorithm failure" and the potential risk of "algorithm derailment," it should be acknowledged that "algorithms are not omnipotent," and human and material resources should be deployed where computing power is insufficient, algorithms lack solutions, and data is missing, forming a good match with and complement to an intelligent society. The "machine learning + manual annotation" processing mode ChatGPT adopts fills the inherent shortcomings of the algorithm model with human resources. Notably, although manual annotation can markedly improve the accuracy of algorithmic conclusions and help ChatGPT reach the effective conclusions the public needs, algorithmic bias remains unavoidable; and precisely because ChatGPT is premised on serving the public and meeting its needs, such bias will be tacitly approved or even supported, ultimately distorting algorithmic conclusions. The problem of algorithmic bias in ChatGPT should be addressed under the concept of combining technology and management, strengthening whole-process supervision of bias from both the technical and the normative sides, and ensuring legal supervision at the normative level while promoting the development of generative artificial intelligence.

2.2.1 For the congenital algorithmic bias of machine learning debugging before ChatGPT's application, adjustments should be made along the learning path of the algorithm model, with normative documents prescribing the technical standards the model must meet and substantive review conducted before ChatGPT enters the market. Article 8 of the "Regulations on the Administration of Algorithm Recommendation in Internet Information Services" issued on December 31, 2021 by the Cyberspace Administration of China and other departments requires algorithm recommendation service providers to regularly review, evaluate, and verify algorithm mechanisms, models, data, and application results, and prohibits algorithm models that induce user addiction or excessive consumption or that violate laws, regulations, or ethics. Under the guidance of this normative document, ChatGPT's algorithm model should undergo strict legal review before deployment, preventing human algorithmic biases from infiltrating the model during machine learning and incorporating the document's requirements into the compilation of the algorithm program as technical standards. Given ChatGPT's particular technical characteristics, the pre-set algorithm correction process should run in two directions. The first is to prevent congenital biases that may arise in machine learning during program compilation. Machine learning takes data as input and produces corresponding conclusions as output, and the computation requires pre-training, which is the algorithm's learning process; the requirements of normative documents should be integrated into algorithm design, parameters that may carry bias should be identified and removed in time, and deviations in the program should be adjusted and proofread so that it returns to the normal operating path. In this way normative documents constrain algorithmic technology, preventing bias from becoming a "hidden disease" of ChatGPT that is later amplified, and providing instead a tool-empowerment path for "technical governance" that improves the supervision of algorithm program code. The second is to prevent algorithmic bias introduced by manual annotation through the setting of norms. ChatGPT has an interface facing the public, and its ability to accurately understand the public's language must be improved during machine learning debugging, avoiding as far as possible congenital biases in language transmission and transformation, which would otherwise let input-side biases evolve into result biases at the output. Such bias comes mainly from manual annotation: in practice, ChatGPT automatically recognizes the linguistic form of content described in different languages, judges, in both form and substance, what the public wants to ask, and gives biased responses. For example, describing the same problem in simplified versus traditional characters may yield completely different responses, a bias chiefly attributable to differential manual annotation acting on the algorithm model during learning. To eliminate such input-induced bias as far as possible and encourage ChatGPT to handle problems fairly and reasonably rather than answer according to formal differences in the input, a unified manual annotation standard should be established, requiring annotators to follow relatively consistent judgment criteria so as to avoid annotation-induced misdirection; at the same time, generative artificial intelligence should be required to follow consistent logic in understanding questions and to give fair responses, rather than deliberately "pleasing" the public with biased answers. A pre-deployment audit of the kind sketched below could make such review concrete.
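
One way to audit the script-form bias described above is to pose the same question in simplified and traditional characters and require substantively consistent answers before release. The similarity measure below is a deliberately crude stand-in; a real audit would use a proper semantic similarity model.

    def similarity(a: str, b: str) -> float:
        """Toy stand-in: character-set overlap; real audits need semantic similarity."""
        sa, sb = set(a), set(b)
        return len(sa & sb) / max(len(sa | sb), 1)

    def consistency_audit(model, paired_prompts, min_similarity=0.6):
        """paired_prompts: (simplified, traditional) versions of the same question."""
        failures = []
        for simplified, traditional in paired_prompts:
            answers = model(simplified), model(traditional)
            if similarity(*answers) < min_similarity:
                failures.append((simplified, traditional, *answers))
        return failures  # a non-empty list blocks release pending re-annotation

    # Toy model that (wrongly) answers differently depending on script form.
    toy_model = lambda q: "Yes." if "simplified" in q else "No."
    print(consistency_audit(toy_model,
                            [("question (simplified)", "question (traditional)")]))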

2.2.2 For the acquired algorithmic biases that emerge through self-learning during ChatGPT's application, an automated, ecological, whole-process dynamic regulatory system should be established externally to review and eliminate them. Such a system's review of algorithmic bias acknowledges that objective technical bias exists in the algorithm's computation and removes the risks it brings through continuous dynamic correction. As highly intelligent generative artificial intelligence, ChatGPT's algorithms will gradually take on autonomous and cognitive features, acquiring an initial capacity for self-learning and moving beyond a purely instrumental role; algorithmic bias will likewise gradually breed in this self-learning process and be difficult to avoid. Against this, even preset bias correction measures will tend to fail, especially with the support of manual annotation, under which the bias generated in self-learning becomes more evident and harder to avoid. Attempting to resolve algorithmic bias once and for all is unrealistic in ChatGPT's highly intelligent context; dynamic supervision, by contrast, balances resource investment with sustained effectiveness, embedding the concept of integrating technology and management into ChatGPT's algorithm operation in an automated, ecological, dynamic manner. Under this concept, the regulation of acquired algorithmic bias mainly comprises establishing an automated regulatory model in which algorithms supervise algorithmic technology, forming an ecological regulatory network in which multiple actors coexist and evolve, and implementing a dynamic regulatory mechanism covering the whole process, thereby achieving comprehensive supervision of ChatGPT. First, automated supervision of algorithmic technology requires automated oversight of both machine learning and manual annotation, especially at the stage where the two connect, preventing annotation bias from "backfiring" into the machine learning process and monitoring ChatGPT's whole process in real time: whenever algorithmic bias appears, output of conclusions is suspended and a solution is sought by tracing back to the bias's root cause (a minimal sketch follows this paragraph). Second, an ecological regulatory network of coexisting, co-evolving actors requires multiple entities to intervene in regulating ChatGPT: administrative agencies should intervene on the basis of regulatory documents, and the platform itself should intervene in time and form industry self-discipline. Regulatory requirements under the integration of technology and management include the specific "hard law" requirements of legal norms and the "soft law" requirements of industry self-discipline conventions. Building an ecological regulatory network in fact requires the platform's active participation, and the intervention of "soft law" is a concrete requirement of platform enterprise compliance. The "soft law" regulating algorithm models is part of an effective compliance plan, forming a complex compliance governance structure together with compliance policies, employee manuals, compliance organizational systems, and compliance management processes; the ecological regulatory network can also stimulate the platform's capacity for technological innovation around ChatGPT. Third, a dynamic supervision mechanism covering the whole process means supervising the entirety of ChatGPT's operation to reduce the probability that algorithmic bias yields incorrect conclusions: not only is machine learning supervised to eliminate bias, but manual annotation is supervised as well, preventing ChatGPT from generating and amplifying bias during self-learning. The comprehensive regulatory system built under the concept of combining technology and management can detect and regulate algorithmic bias in ChatGPT's operation in time, systematically advancing the construction of a trustworthy, controllable algorithmic system, which favors the application of emerging generative artificial intelligence such as ChatGPT in real society and prevents algorithmic bias from hindering the development and promotion of artificial intelligence technology.

In short, the emergence of ChatGPT means that the development of artificial intelligence has entered a new stage, and the algorithm, as its core, has gradually outgrown its role as a tool to become a fundamental principle for allocating resources. Facing the potential risk of algorithmic bias in ChatGPT, its technical feature of "machine learning + manual annotation" should be analyzed and bias eliminated as far as possible under the concept of combining technology and management, removing congenital algorithmic bias at the technical and normative levels and comprehensively regulating acquired bias, so that generative artificial intelligence systems free of algorithmic bias can be promptly put to use in real life.


3 Improving Quality and Efficiency: Analysis and Reshaping of Intellectual Property Rights in the Generation Stage of Generative Artificial Intelligence


The rise of generative artificial intelligence has challenged many industries, but its greatest impact falls on the field of intellectual property at the generation stage. Because generative artificial intelligence is so highly intelligent, the ownership of intellectual property in its computations has changed disruptively compared with earlier artificial intelligence systems, making intellectual property risk the third major risk generative artificial intelligence cannot avoid. The "Management Measures for Generative Artificial Intelligence Services (Draft for Soliciting Opinions)" repeatedly mentions "respecting intellectual property rights," "preventing infringement of intellectual property rights," and "not containing content that infringes intellectual property rights," reflecting regulatory attention to the legal consequences of intellectual property harms generative artificial intelligence may cause and highlighting the practicality and foresight of the drafting process. In fact, although OpenAI's "Sharing and Publication Policy" for ChatGPT provides that content co-created with ChatGPT belongs to users, it also requires that a work clearly disclose ChatGPT's role in a way no reader could miss. In view of this, solutions to the intellectual property disputes ChatGPT causes at the generation stage should aim at improving quality and efficiency: ChatGPT's technical advantages in creation should be recognized and explored, and the mode of allocating intellectual property should be reasonably adjusted according to ChatGPT's technical characteristics.


3.1 Analyzing the Intellectual Property Attributes of ChatGPT Based on Its Technical Model

As generative artificial intelligence, ChatGPT's capabilities in processing and analyzing data are significantly stronger than those of analytical artificial intelligence. Its content generation spans automated compilation, intelligent polishing and processing, multimodal transformation, and creative generation, directly changing how published content is produced and supplied, and thus raising the question of whether the products of generative artificial intelligence are protected by intellectual property rights.

In real life, authors have already used ChatGPT generated content to publish books and earn royalties. More than 200 new books authored under ChatGPT have been listed on Amazon, and Amazon even has a column for ChatGPT creation. However, in fact, although some of the creators of ChatGPT contain the creative factors (thoughts, emotions) of natural individuals, which to some extent meet the requirements of the composition of works, there is still controversy over whether such works created by generative artificial intelligence can be empowered, and the specific standards for empowering recognition are still blank. The main mode of ChatGPT's creative works is to excavate the texts of human daily communication, conduct statistical analysis, and even crawl existing databases to combine and produce new works. Therefore, there is controversy about the value of "originality" in such works, which is also the source of ChatGPT's intellectual property disputes. In fact, even ChatGPT itself has "doubts" about the attributes of its creative works. When asked, "Is the content you generate a work?" ChatGPT acknowledges that it can generate text based on input prompts, but these generated texts are not considered works because these works do not contain elements such as creativity, originality, or artistry, Only generate input prompts based on pre trained models. Therefore, ChatGPT believes that the content it generates is more similar to tools or auxiliary tools, which can help the public automatically generate some text, but it is not considered a creative or original work. On September 18, 2019, the International Association for the Protection of Intellectual Property Rights (AIPPI) 'Resolution on Copyright Issues of Artificial Intelligence Generations' proposed that artificial intelligence generators can obtain copyright protection when there is human intervention in their generation process and the generator meets other conditions that protected works should meet, while artificial intelligence generators without human intervention in the generation process cannot obtain copyright protection. In the academic community, there are endless disputes about the attributes of artificial intelligence products. Supporters of artificial intelligence intellectual property believe that "artificial intelligence products are a form of deductive work of design copyright", while opponents of artificial intelligence intellectual property believe that the current artificial intelligence products are only conclusions of data algorithms, which are essentially calculations and imitation, rather than intellectual labor, without the attributes of intellectual property and cannot become objects of intellectual property, It should belong to the public domain. In judicial practice, most people hold a positive attitude towards the protection of artificial intelligence products, such as "when new fonts appear in the font library that are different from existing ancient calligraphy fonts, public fonts that already exist in existing computer font libraries, and also different from ordinary calligraphy art fonts, this new font has certain unique characteristics, which are obtained through artificial intelligence and belong to the scope of intellectual property rights", Therefore, the products of artificial intelligence are protected by law when they have originality and innovation. 
Faced with intellectual property disputes over ChatGPT's products, whether in ChatGPT's own responses or in the collision of academic viewpoints, the discussion should return to the technology itself: the attributes of ChatGPT's products should be judged through analysis of its technical model. The technological progress of generative artificial intelligence, and the ways and extent to which human input intervenes in its products, differ from what came before. Ignoring this progress and judging the attributes of its products in sweeping terms would disconnect the law from technological development and hinder the improvement and efficient advancement of artificial intelligence technology.

Compared to traditional artificial intelligence technology, the innovation of ChatGPT as a generative artificial intelligence lies in its partial autonomy throughout the entire production and processing process. Unlike previous algorithmic programming, ChatGPT shapes its own outputs through deep neural networks, initially demonstrating deep learning ability and simulating the structure of human brain neural networks to acquire and output data. Although ChatGPT's creative power still operates at the level of words and has not completely escaped the scope of its training corpus, its technological progress cannot be denied: after GPT-3.5 was upgraded to GPT-4, the scale of its artificial neurons began to approach what human-like thinking would require, and even if that level has not yet been reached, the trajectory cannot be dismissed. In a context of technological iteration and upgrading, theory must be updated as social development changes; whether ChatGPT's products have intellectual property attributes should be determined from the operating mode and technical characteristics of the existing technology.

Firstly, compared to analytical artificial intelligence, the receiving end of generative artificial intelligence necessarily involves human will: the public feed their needs to ChatGPT, which means human will has intervened in ChatGPT's creation. Its works therefore carry the will of particular persons, meeting the requirements of intellectual property protection and the provisions of the AIPPI resolution. Moreover, the intervention of human will also points to the ownership of intellectual property rights in ChatGPT's products: since human thinking has become a fundamental element of their originality, the intellectual property rights in ChatGPT's products should belong to the questioner, which coincides with OpenAI's "Sharing and Publishing Policy".

Secondly, the algorithm model of generative artificial intelligence internally adopts a "machine learning + manual annotation" mode, and manual annotation reflects human will and carries it forward in the form of operating rules and learned algorithms. ChatGPT thus has humanoid intelligence, and in some cases has even passed the Turing test. This humanoid generative artificial intelligence gradually breaks through traditional technical barriers: the progress of "machine learning + manual annotation" injects an innovative and original "soul" into its products, and that "soul" is interpretable, since its production path can be traced and explained. Its products should therefore be recognized as enjoying intellectual property rights.

Thirdly, from the perspective of engineering, ChatGPT's interpretability far exceeds general expectations. On May 10, 2023, OpenAI released research on using language models to parse neurons for alignment purposes, employing GPT-4's architecture to explain a GPT-2 architecture containing 307,200 neurons, that is, using ChatGPT to "explain" ChatGPT.
Progress in this technical model means that ChatGPT has a certain capacity for "self-reflection", which provides stronger technical support for breaking through the "interpretability barrier" and indirectly confirms the practical feasibility of explaining ChatGPT's operating mechanism. As ChatGPT continues to be optimized and upgraded, the interpretability of its technical model increases accordingly; the public can even use the technical model itself to explain how newly generated content came about, which becomes a strong basis for confirming the originality and innovation of ChatGPT's products and for granting them intellectual property rights.
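To make the "model explains model" idea concrete, the following is a minimal, hypothetical Python sketch of the loop described in that research: record a target neuron's activation on sample texts, ask a stronger explainer model for a one-sentence hypothesis about what the neuron detects, then score the hypothesis by how well activations simulated from it track the real ones. The helper callables `get_neuron_activation` and `ask_explainer` are assumptions, not a real API; only the overall shape follows the published method.

```python
# Sketch of the neuron-explanation loop (assumed helpers, not a real API).
import numpy as np

def explain_neuron(texts, get_neuron_activation, ask_explainer):
    # 1. Record the target neuron's activation on each text excerpt.
    records = [(t, get_neuron_activation(t)) for t in texts]

    # 2. Ask the explainer model (e.g. GPT-4) for a natural-language
    #    hypothesis about what makes this neuron fire.
    prompt = "Neuron activations per excerpt:\n" + "\n".join(
        f"{act:.2f}\t{text}" for text, act in records
    ) + "\nIn one sentence, what concept does this neuron respond to?"
    explanation = ask_explainer(prompt)

    # 3. Ask the explainer to *simulate* the neuron from its own explanation
    #    (toy assumption: it replies with a bare number between 0 and 1).
    simulated = [
        float(ask_explainer(
            f"Given the explanation '{explanation}', predict an activation "
            f"between 0 and 1 for the excerpt: {text}"
        ))
        for text, _ in records
    ]
    real = [act for _, act in records]

    # 4. Score the explanation: correlation between real and simulated
    #    activations (needs >= 2 texts with nonconstant activations).
    score = float(np.corrcoef(real, simulated)[0, 1])
    return explanation, score
```

The score is the "interpretability" evidence the argument above relies on: an explanation that predicts the neuron's real behavior well is a traceable, human-readable account of one component of the model's production path.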

In short, ChatGPT, as a generative artificial intelligence, has achieved breakthroughs in its technical model and interacts with individual wills during operation; moreover, its algorithm model carries individual will through manual annotation and possesses a degree of interpretability. Its products therefore display innovation and originality and meet the substantive requirements for granting intellectual property rights. How to innovate and develop the traditional intellectual property system to fit the development of generative artificial intelligence requires sustained attention and thought.


3.2 Reshaping ChatGPT's intellectual property protection system based on interpretability

Protection of generative artificial intelligence products cannot extend to all content: blanket protection would leave the public constrained in applying ChatGPT and would also hinder the development of generative artificial intelligence technology itself. The protection of ChatGPT's intellectual property rights should therefore adopt a focused, selective mode, building an intellectual property compliance system around specifically identified protected content.

Article 13 of the Science and Technology Progress Law of the People's Republic of China, as revised by the Standing Committee of the National People's Congress on December 24, 2021, stipulates that the state formulates and implements an intellectual property strategy, establishes and improves the intellectual property system, creates a social environment that respects and protects intellectual property, and encourages independent innovation; enterprises, institutions, social organizations, and scientific and technological personnel should strengthen their awareness of intellectual property, enhance their capacity for independent innovation, improve their ability to create, apply, protect, manage, and serve intellectual property, and raise the quality of intellectual property. This reflects the state's macro-level emphasis on building an intellectual property protection system. The advance of digital technology has fundamentally changed how generative artificial intelligence creates, preserves, and distributes knowledge content (images, music, text, and video). These changes stem mainly from the practical value that generative artificial intelligence creates through interpretable algorithms. The technical core of ChatGPT lies in its interpretable algorithmic components, and the content generated by interpretable algorithms is innovative and original; interpretable algorithms and the content they generate are therefore the key objects of ChatGPT's intellectual property protection.

In ChatGPT, reshaping the intellectual property protection system around interpretability primarily breaks the "constraints" of traditional artificial intelligence technology, while also using reasonable standards to limit the excessive expansion of artificial intelligence technology and prevent the scope of intellectual property protection from sprawling without order. In earlier artificial intelligence, algorithmic decision-making drew conclusions from input data, and those conclusions were the product of the system's autonomy and unpredictability: they were not controllable by the designer, did not reflect the designer's intentions, and so could not be interpreted. This inexplicability made it difficult to prove "whether the system had been set with rights-harming algorithms (algorithmic bias), or whether and how the artificial intelligence autonomously generated controversial statements through self-learning", so a corresponding intellectual property protection system could not even be discussed. Although artificial intelligence technology represented by deep learning has achieved remarkable results, how to ensure that algorithmic decisions, and the data driving them, are explained to end users and other stakeholders in a non-technical way remains a challenge for any compliance-based intellectual property protection system: the difficulty of interpreting the "black box" algorithms of earlier artificial intelligence rendered its products inexplicable. The essence of legal responsibility is accountability; artificial intelligence without interpretability cannot answer for itself and thus cannot bear legal responsibility. The purpose of legal responsibility is prevention; artificial intelligence without interpretability cannot serve that preventive purpose, so legal protection is likewise out of the question.

By contrast, ChatGPT's technological advances and application model mean that some of the content in generative artificial intelligence is interpretable. As Kant put it, "Everything in nature works according to laws. Only a rational being has the capacity of acting according to the conception of laws, that is, according to principles, in other words, has a will." Given that "machine learning + manual annotation" allows the compilers of algorithm programs to incorporate their own will into ChatGPT's programming through the process of manual annotation, some of ChatGPT's programs become interpretable. The interpretable algorithms and their corresponding products are accordingly original and innovative and deserve intellectual property protection. This also delimits the scope of ChatGPT's intellectual property protection system: interpretability defines the boundary of protection, and these interpretable algorithms and contents are practically operable within that system.
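As an illustration of how "manual annotation" leaves a traceable imprint of human will in the model, here is a toy Python sketch in which human preference labels over pairs of candidate answers are fitted into a Bradley-Terry-style reward score. The data and names are hypothetical and a real RLHF pipeline is vastly larger, but the point is the one made above: the annotators' judgments end up encoded in inspectable learned parameters, which is what grounds the claim of interpretability.

```python
# Toy sketch: human pairwise preferences fitted into a reward model
# (Bradley-Terry likelihood, plain gradient ascent). Hypothetical data.
import numpy as np

# Each record: (features of answer A, features of answer B,
# 1 if the human annotator preferred A, else 0).
# The feature vectors stand in for model embeddings of the answers.
annotations = [
    (np.array([0.9, 0.1]), np.array([0.2, 0.8]), 1),
    (np.array([0.4, 0.6]), np.array([0.7, 0.3]), 0),
    (np.array([0.8, 0.5]), np.array([0.1, 0.2]), 1),
]

w = np.zeros(2)          # reward-model weights, learned from human labels
lr = 0.5
for _ in range(200):     # gradient ascent on the Bradley-Terry likelihood
    for fa, fb, pref in annotations:
        p_a = 1 / (1 + np.exp(-(w @ fa - w @ fb)))  # P(annotator prefers A)
        w += lr * (pref - p_a) * (fa - fb)

# The weights now record, in an inspectable form, what the annotators rewarded.
print("learned reward weights:", w)
```

Because the learned weights can be read out and traced back to specific annotation records, the path from human judgment to model behavior is explainable in exactly the sense the protection argument requires.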

To reshape ChatGPT's intellectual property protection system around its interpretable content, one can draw on existing experience and combine it with ChatGPT's technical characteristics to develop a dedicated protection system.

Firstly, make clear that the object protected by ChatGPT's intellectual property compliance system is interpretable content, including interpretable algorithms and the specific content derived from them. A protection system built on this basis can confine the focus of protection to ChatGPT's core areas of value and effectively protect its productive capacity.

Secondly, clarify ChatGPT's specific protection tasks, divided into basic and specialized tasks. The former concerns the general prevention side of intellectual property protection; the latter strengthens protection according to ChatGPT's technical characteristics, constructing specialized management systems with prevention, monitoring, and response functions and introducing differentiated management elements that reflect the differences between the technical models of generative and traditional analytical artificial intelligence.

Thirdly, build whole-process protection for ChatGPT's interpretable content, reviewing both the design and the operational effectiveness of the protection plan for ChatGPT-generated products to avoid loopholes. Although ChatGPT's interpretable content is concentrated in the later, content-generation stage, the selection of basic data and the protection of intellectual property in the earlier stage should also be covered, so that compliance supervision of interpretable content spans the whole process.

Fourthly, introduce new protection technologies for the interpretable content, such as Digital Rights Management (DRM). DRM can set access permissions for ChatGPT's interpretable algorithms, and comes in password-based systems, digital-watermark-based systems, and combinations of the two. When infringing content appears in cyberspace, regulatory and protection agencies can promptly delete infringing information, disconnect infringing links, and prevent the scope of infringement against ChatGPT from expanding. Applying DRM provides technical support for the intellectual property protection system and, working together with ChatGPT's own technology, builds an integrated protection system while improving quality and efficiency.
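To make the digital-watermark branch of DRM concrete, the following toy Python sketch embeds an HMAC-based invisible watermark (zero-width characters) into generated text, so that an alleged infringing copy can later be checked for provenance before deletion or link disconnection. The signing key, the owner identifier, and the truncated 4-byte tag are all illustrative assumptions, not a production DRM design.

```python
# Toy text watermark: an HMAC tag over (owner_id + text) is hidden as
# zero-width characters appended to the text. Illustrative only.
import hmac, hashlib

SECRET_KEY = b"rights-holder-secret"   # hypothetical signing key
ZW0, ZW1 = "\u200b", "\u200c"          # zero-width chars encoding bits 0 / 1

def watermark(text: str, owner_id: str) -> str:
    tag = hmac.new(SECRET_KEY, (owner_id + text).encode(), hashlib.sha256).digest()[:4]
    bits = "".join(f"{byte:08b}" for byte in tag)
    invisible = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + invisible            # watermark is invisible when rendered

def verify(marked: str, owner_id: str) -> bool:
    visible = marked.rstrip(ZW0 + ZW1)           # strip the hidden suffix
    bits = "".join("1" if ch == ZW1 else "0" for ch in marked[len(visible):])
    claimed = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    expected = hmac.new(SECRET_KEY, (owner_id + visible).encode(), hashlib.sha256).digest()[:4]
    return hmac.compare_digest(claimed, expected)

marked = watermark("Generated answer text.", "author-42")
print(verify(marked, "author-42"))     # True: provenance confirmed
print(verify(marked, "someone-else"))  # False: claimed owner does not match
```

A password-based DRM system would instead gate access to the interpretable algorithms themselves; the combined mode mentioned above would pair such access control with watermarking of the outputs.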

In short, the rise of the digital economy not only demands new output value from emerging artificial intelligence technologies; artificial intelligence products and technologies are themselves gradually becoming new objects of intellectual property protection, which raises both the security risks and the difficulty of protection. In the era of big data and artificial intelligence, human life and action have been "kidnapped" by intelligent algorithms and the subjectivity of the "person" has gradually eroded, so a "reconciliation" between people and technology must be achieved, and the premise of that reconciliation is an increase in the interpretable content of artificial intelligence technology. It is therefore necessary to seize this opportunity to strengthen the protection of interpretable content, achieve cooperation between technology and regulation, and provide external support for judicial practice. Shaping ChatGPT's intellectual property protection system around interpretable content not only achieves the purpose of protection and gives ChatGPT's technical development a good external environment, but also reasonably limits the scope of protection and avoids the social disputes that a disorderly expansion of intellectual property protection would cause. With ChatGPT's interpretable content as the core theme of protection, a favorable overall environment for the development of artificial intelligence can be created and legal safeguards provided for improving ChatGPT's quality and efficiency.


Epilogue


The intelligent era is driven by technologies such as big data and artificial intelligence. With the vigorous development of generative artificial intelligence technology represented by ChatGPT, the construction of future artificial intelligence environments such as the metaverse is within sight: generative artificial intelligence first matures through linguistic interaction between humans and machines, then through behavioral interaction, ultimately achieving a highly intelligent overall scenario, with the metaverse possibly ubiquitous around 2030. As generative artificial intelligence develops and is applied, many of its accompanying social impacts are not yet obvious, but they may surface in later stages of generation, and forward-looking prevention is required.

Generative artificial intelligence, represented by ChatGPT, has many diffuse impacts in social application. Beyond the three major security risks discussed above, other types of security risk remain. First, it may affect educational fairness: content generated by ChatGPT may give students an unfair competitive advantage. Educational institutions in the United States, Australia, and Singapore have assessed the potential cheating crisis posed by ChatGPT, and some universities have banned students from submitting ChatGPT-written theses and assignments, with violations treated directly as cheating. Second, it may affect research ethics: improper use of ChatGPT may foster technological dependence and thereby damage researchers' independence and the quality of scholarship. Third, there is the question of carbon neutrality and environmental protection: generative artificial intelligence converts electric power into computing power, a process that consumes large amounts of energy, so national-level planning and a reasonable layout of artificial intelligence systems are needed to avoid wasting resources and to implement green principles. Finally, it may further widen the digital divide and harm the interests of digitally vulnerable groups. On the one hand, ChatGPT creates absolute information asymmetry between individuals and platforms: the platform holds an absolute informational advantage and individual rights are further compressed. On the other hand, differences between groups become more pronounced in the context of ChatGPT, for instance a widening "silver-hair gap" that damages the rights and interests of the elderly, limits their basic life choices, reduces their quality of life, and endangers their social participation.

In view of this, the application of natural-scientific knowledge and technology to legal phenomena is becoming ever more salient. To avoid the trap of technologism, the potential impact of emerging technologies on present and future society must be analyzed, and feasible solutions offered from the perspective of legal regulation. In short, regarding the impacts of generative artificial intelligence, it is necessary both to regulate reasonably the security risks it has already caused and to consider the generalized security risks it may cause in the future. On the basis of risk prevention, the security risks that are for now insignificant in various industries should be prevented in advance, so as to maximize the technical efficiency of generative artificial intelligence while minimizing the negative impact of emerging technologies on social development.