Liu Zhenyu | The Regulation of Generative AI: Recognition or Redistribution
2023-11-01

Author: Liu Zhenyu, Associate Professor, College of Philosophy, Law & Political Science, Shanghai Normal University

[Abstract] The publication of "The Measures for the Administration of Generative AI Services (Draft for Consultation)" indicates that legislators have begun to pay attention to the social impact of generative AI. The document stipulates the legal concept of "generative artificial intelligence", but the definition it gives has obvious defects. The reason is that the "self-attention" nature of generative AI easily provokes the debate over "whether generative AI is a legal subject", which is precisely the point the legislators try to avoid. However, regulatory "non-recognition" is in essence itself a form of "recognition", and it leads either to useless regulation or to the future alienation of human beings. Here one of Marxism's classical propositions, "social existence determines consciousness", offers another possibility for the regulation of generative AI. On this view, the subjectivity of AI is no longer the core problem; the core problem becomes whether the human subjectivity embedded in the AI system is diminished.

[Key words] generative AI, legal subject, recognition, redistribution, social existence

On April 11, 2023, the Cyberspace Administration of China publicly solicited comments on "The Measures for the Administration of Generative Artificial Intelligence Services (Exposure Draft)" (the "Exposure Draft"). On May 31, the General Office of the State Council issued the State Council's 2023 annual legislative work plan, which includes preparing to submit a draft "Artificial Intelligence Law" to the NPC Standing Committee for deliberation. Regardless of how the Measures and the draft law are connected, their emergence means not only that the state has recognized the necessity and urgency of legislating in the field of AI, but also that "artificial intelligence", including "generative artificial intelligence", is about to be transformed from a scientific and technological concept into a legal concept. The question of how the legal system recognizes the existence of generative AI, however, remains unanswered.

1. Notable flaws in the concept of "generative AI"

The concept of "generative artificial intelligence" is clearly stipulated in paragraph 2 of Article 2 of the Exposure Draft. This indicates that "generative artificial intelligence" has been "defined" as a legal concept, and that relevant academic research henceforth has a normative basis, not merely a factual one.

Conceptualizing the core matters or social relations regulated by a legal text has become a common legislative technique in recent years. For example, Article 4, paragraph 1, of the Personal Information Protection Law defines what is meant by "personal information". A brief comparison reveals a difference in the wording of the two articles. The former begins with the phrase "for the purpose of these Measures", implying that generative artificial intelligence may be an entirely different concept in theory once it is taken out of the context of the Exposure Draft; the latter, without qualification, states what "personal information" is, which means that in the opinion of the legislator (if there is such a thing as a real legislative intention) the concept has already attained a level of abstraction and coverage that transcends the text, and that its legal meaning does not differ from the understanding of "personal information" in daily life. Of course, this apparent difference could be dismissed as a kind of "literalist" imagination. Another example is Article 2 of the Law on the Protection of Minors, which likewise contains the words "for the purposes of this Law". Although on the surface using "eighteen years of age" as the line between adults and minors causes no friction between legal norms and daily life, it is in fact precisely the words "for the purposes of this Law" that limit the scope of "adults" and "minors". In other words, in daily life eighteen years of age is not the only criterion for judging who is an "adult" or a "minor"; the legal norm confirms it as the only criterion through this legal expression, and the phrase "for the purposes of this Law" reflects the legislator's full awareness of the limits of its own knowledge.
It is not the purpose of this article to discuss the rhetorical function of the phrase "for the purposes of this Law" (as with the phrase "this Law is developed in accordance with the Constitution"), or to ask whether the wording of the Personal Information Protection Law can withstand the test of common sense, but rather to point out that the qualifying expression in the Exposure Draft is apt: apt not in the sense that the definition which follows it is accurate, but precisely in the sense that the definition which follows it is inaccurate.

According to the Exposure Draft, the concept of generative artificial intelligence is defined as "the technology for generating text, images, audio, video, code, and other content based on algorithms, models, and rules". Theoretically, a legal concept should be a normative concept, not a descriptive one. This is because the law is both a norm a person should follow when acting and, at the same time, a norm by which a person can judge the acts of others. The function of a concept is largely negligible for the individual's own acts, which can, after all, be understood self-consistently under the guidance of personal reason. Its function becomes apparent, however, when judging the behaviour of others: effective communication is possible only if both parties have an essentially equivalent understanding of the concepts in the same norm, or at the very least can recognize each other's understanding of them. Consider again the concept of "minor", a standard normative concept. First, the concept makes clear that minors are a special category of citizens: a subject that cannot be covered by the concept of "citizen" cannot be a minor in the legal sense. Second, the concept makes clear that what is special about minors as a category of citizens is that they are "under eighteen years of age"; on the basis of citizenship, the age limit constitutes the essential attribute of the concept, distinguishing it from its counterpart concept (which is, of course, "adult"). Finally, the terms "citizen" and "age" here are themselves standard normative concepts.
In the Chinese jurisdiction, the normativity of "citizen" can be traced back to Article 33 of the Constitution, where the combination of "holding the nationality of the People's Republic of China" and "persons" becomes the standard conceptual structure and clarifies its normative character. The normative criterion for age in full years generally refers to Article 2 of the Interpretation of the Supreme People's Court on Some Issues Concerning the Specific Application of Law in the Trial of Criminal Cases Involving Minors (Interpretation No. 1 [2006] of the Supreme People's Court), where the combination of "the Gregorian calendar" and "the day after the birthday" becomes the formal structure of the criterion of judgment and clarifies its normative character. Thus the concept of "minor" becomes normative: as long as a person has a general understanding of the concept of "person" and basic mathematical ability (can count to eighteen), he can judge "what a minor is" (and thus "what an adult is").
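To see just how determinate this criterion is, the counting rule cited above can even be written down mechanically. The sketch below only illustrates the "Gregorian calendar, day after the birthday" rule; the function name `is_minor` and the handling of a February 29 birthday are the author's own assumptions, not statutory language.

```python
from datetime import date

def is_minor(birth: date, on: date) -> bool:
    """Illustrative check of the "under eighteen" criterion.

    Per the SPC interpretation cited above, a person is deemed to have
    reached an age (Gregorian calendar) only from the day AFTER the
    corresponding birthday, so someone is still a minor on the
    eighteenth birthday itself.
    """
    try:
        eighteenth = birth.replace(year=birth.year + 18)
    except ValueError:
        # Feb 29 birth, non-leap target year: treat Mar 1 as the
        # birthday (an assumption; the interpretation does not say).
        eighteenth = date(birth.year + 18, 3, 1)
    return on <= eighteenth  # a minor up to and including the birthday

assert is_minor(date(2005, 6, 1), date(2023, 6, 1))      # still a minor on the 18th birthday
assert not is_minor(date(2005, 6, 1), date(2023, 6, 2))  # an adult from the next day
```

Anyone with a calendar who can count to eighteen can apply the rule; that is exactly what the text means by saying the concept leaves a generally rational subject with a determinate judgment.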

Unfortunately, the conceptualization of "generative artificial intelligence" in the Exposure Draft fails this test of normative concepts. In the Chinese jurisdiction, "technology" has not yet been transformed into a genuine legal concept. Laws and regulations such as the Law on Scientific and Technological Progress, the Law on the Popularization of Science and Technology and the rules on technology contracts all decline to define "technology"; in these laws, the phrase "this Law shall apply to", rather than "for the purposes of this Law", constitutes the characteristic legislative technique. This matters, but it is not decisive, since there may simply be no way to provide a comprehensive legal definition of "what technology is": the term belongs to the scientific and technological system and is not a creation of the legal system. The legal concept of "personal information" shows that "information" is likewise not a legal concept, yet this does not impair the intelligibility of "personal information" as a legal concept. The remaining controversy over that concept concerns mainly its extension (how to "identify", how to "anonymize") rather than its definition (what "identification" and "anonymization" are). The "generative artificial intelligence" of the Exposure Draft, by contrast, has visible flaws in its selection of essential attributes: the formulation "the technology for generating text, images, audio, video, code, and other content based on algorithms, models, and rules" is not enough for a generally rational subject to recognize what makes artificial intelligence "generative".

On the one hand, within the legal normative system, the boundary between "generative artificial intelligence" and "deep synthesis technology" is far from clear. According to Article 23 of the Provisions on the Administration of Deep Synthesis of Internet-based Information Services, issued in November 2022, "deep synthesis technology" is "the technology that uses deep learning, virtual reality, and other synthetic algorithms to produce network information such as text, image, audio, video, and virtual scene". A simple comparison shows that the two definitions share "algorithm", "text" and "video"; that "picture/image" and "sound/audio" are near-identical; and that the differences lie in "model" and "code" on one side and "deep learning" and "virtual scene" on the other. The problem is that these differences are phenomenal, not substantive: "deep learning" is just one class of "model", and a "virtual scene" is just one manifestation of "code". In this sense, can it be argued that generative AI is the superordinate concept of deep synthesis technology? The answer tends to be "yes". But if this judgment is accepted, then the concept corresponding to deep synthesis technology (there being no "shallow synthesis technology", only synthesis technology in the general sense) necessarily becomes a subordinate concept of generative artificial intelligence. This does not accord with the common understanding of "generative AI": synthesis technology in the general sense is hardly a recent product, whereas "generative AI" is regarded as a new thing represented by ChatGPT. Moreover, the technical field generally accepts that "the 'large-scale language models' represented by ChatGPT ... represent a major technological advance in the field of deep synthesis ... including current synthetic algorithms, generative artificial intelligence, and AIGC (artificial intelligence-generated content) applications"; that is, generative AI is an outgrowth of deep synthesis technology, not its superordinate concept.

On the other hand, the concept is also dysfunctional even if we leave the conceptual dispute within the legal system and enter the domain the law tries to regulate, setting aside the internal logical self-consistency of "what this Law says" and focusing instead on the external, functional coupling of "what this Law applies to". The "algorithms", "models" and "rules" of "generation based on algorithms, models, and rules" do not point exclusively to "generative" artificial intelligence: every "code operation" inevitably involves them. In other words, in the real field of science and technology it is not only "generative AI" that conforms to the concept in the Exposure Draft; AI technologies other than generative AI conform to it as well. AlphaGo, for example, is also a technology that generates images based on algorithms and models. Scholarly attempts to sharpen the concept remain insufficiently grounded in the norm itself. It has been said, for instance, that generative AI "takes the construction of the human-computer interface as its platform port, and takes effective, real-time, unlimited interaction between the two as its main direction", highlighting the two characteristics of "limited immediacy" and "infinite interaction", the latter of which "continuously activates an open, dialogic mode of generation, draws no fixed boundaries, and performs no breakpoint hierarchical differentiation judgments", characteristics which AlphaGo-stage artificial intelligence does not possess. As a result, the concept of "generative AI" under the Exposure Draft faces outside the legal system the same challenge it faces within it: a concept meant to describe something new is nested inside an old thing, revealing not the essential attributes of that thing but only its non-essential ones, and leaving ambiguity both inside and outside the legal system.

2. The unspeakable hidden nature of artificial intelligence "generation"

The role of concepts does not necessarily lie in clarity. More often, people use concepts because "concepts lead us to make investigations; are the expression of our interest, and direct our interest". Legislators are not mediocre. This does not mean that every specific person involved in formulating a law or regulation has a complete grasp of the field it regulates, so that the law as enacted is free of problems; such a reading is nothing more than an extravagant hope. To insist on this extravagance is the ultimate conceit of human reason, and such conceit would lead humanity to ruin. The rationality of any single person is limited, and the rationality displayed by a group, under specific constraints of time and space, is still limited. Legislation is nothing but a contest between the rationality of one group and that of another, and the winner gains only the relative rational advantage of that time and place, not an eternal rational warrant transcending time and space. It is precisely because of this that the question of Antigone endures. Even setting aside the transcendent claims of natural law, the history of human legislation shows through experience that the revision and abolition of legal norms are normal, and that any law is behind the times from the moment it is enacted. If we regard legislation as a science, then we must accept the sober conclusion that science advances by falsification, and that specific legal norms will inevitably be superseded in a time and place they no longer fit. The phrase "legislators are not mediocre", therefore, is a value judgment, not a factual one.
What it implies is that the researcher should adopt a normative presupposition: the legislator should not be assumed to be mediocre, but should be regarded as possessing general rationality, since the object of legislative constraint is the generally rational person. At the same time, researchers cannot take themselves to surpass the legislator in legislative rationality, even though researchers specialize in law while legislators come from all walks of life. Faced, then, with a glaring flaw that an average reasonable person can spot on a little reflection (the concept of "generative AI" fails to reveal the essential characteristics of generative AI), the real question should be "why would someone who is not mediocre make such an obviously mediocre mistake", not "why was a mistaken legislator chosen to make such a mistaken norm".

As an attempt at communication between the legal system and the technological system, let us set aside the internal legal flaws of the concepts of "generative AI" and "deep synthesis technology" (otherwise we would have to discuss further whether the legal concept of "deep synthesis technology" offers useful guidance to scientific and technological practice) and focus on the second ambiguity: what distinguishes generative AI from earlier artificial intelligence. Just as a "man" or a "woman" necessarily has all the characteristics of a human being while also having characteristics the other does not, generative AI must have all the characteristics of AI while also having exclusive characteristics that distinguish it from "non-generative" AI. The difference can be illustrated by ChatGPT and AlphaGo, regarded as the most typical examples (more accurately, the most popular ones). Although both are considered artificial intelligence, and even considered to have a certain correlation (in another sense a negligible one, since ChatGPT's infrastructure and algorithms differ greatly from AlphaGo's), they actually represent two stages of AI development, the former a comprehensive upgrade of the latter. The upgrade is this: if AlphaGo-stage artificial intelligence already possessed the capacities for self-learning and self-evolution, then ChatGPT, using the Transformer neural network model, acquired on top of these two capacities the capacity for self-attention; after all, the "generative pre-trained" model, GPT (Generative Pre-Training), is the product of this architecture.
"Self-attention" is indeed the literal meaning, and the algorithm not only seeks information from the outside world when it is encoding, but also from itself. In other words, the output of the AI at this point is not only that it paraphrases other people's statements, but also that it paraphrases other people's statements under the premise that it conforms to its own system orchestration sequence. It is this acquisition of "self-attention" ability that has brought the development of artificial intelligence into a new stage and constructed a generative pre-trained evolutionary. Therefore, theoretically speaking, the "generic" essence of generative AI is the matrix of self-attention, and the characteristic is "self-attention".

The self-attention matrix is an algorithmic mechanism belonging to the scientific and technological system. If the legal system is to incorporate this mechanism into legal norms, it must be transcoded into the terms of the legal system, and this is where the trouble arises. The construction of the modern system of legal norms is synchronic with the modern Enlightenment and shares its basic premise: "I think, therefore I am". That premise establishes the dichotomy of subject and object: the object cannot think; the subject can. Man could free himself from God's bondage and become a subject not because of physical strength but because he has reason and is able to think. A person with well-developed limbs who does not think still cannot construct his own subjectivity, while a person with a bodily disability who thinks can. The heavy body is but the object; the subject resides in infinite self-thought. In this way reason and will, after Thomas Aquinas, are once again subtly linked (thought is at once the result of the will and the manifestation of reason), and together, each obscuring the other, they form the basis of the legitimacy of the modern legal order. In this context, "attention" (注意) becomes a particularly interesting verb; although it can also serve as a noun, the noun is nothing more than a representation of the verbal state. As an act, the core of 注意 lies not in 注 (fixing upon) but in 意 (intention); otherwise it would be 注视, "fixation", a different word altogether. The point holds not only in Chinese but also in English and Latin. This difference between "attention" and "fixation" gives "attention" a unique content of "intention": the word itself contains a manifestation of the will. Hence in law, whether in the rule of care in criminal law or the duty of care in civil law, "care" is always tied to a specific subject. In other words, both in the modern worldview of the subject-object dichotomy and in the system of legal norms built upon it, only a subject, never an object, can perform the act of attention. An object may, under the guidance of external intentionality, display the outward appearance of attending, but this appearance will not count as "attention", since the object has no "intention" with which to attend. If, then, the essential attribute of generative AI really is "self-attention", taking that attribute as the core of the legal concept of generative AI means facing a constant risk: why is a being capable of self-attention not a subject? All the more so since, building on its existing self-learning and self-evolution, generative AI can not only attend to itself but also feed that self-attention back into attention to others (ChatGPT's conversational language already displays this well), so why is it not a subject?
From here it is easy to slide quickly into questions such as "whether artificial intelligence will replace humans" and "whether human knowledge and abilities will still be of use in the future". Although "this discussion is actually just a repetition of previous fears and fantasies about the development of AI technology", the everyday understanding of generative AI does have this tendency: the expression "generates the new by itself" carries more subjecthood than the technical formulations "self-attention" or "automatically generated without direct human involvement". After all, as its representative, "ChatGPT has demonstrated superb human-machine natural language dialogue capabilities in engineering terms, which not only breaks through, to a considerable extent, the limits set by many pessimists, but also exceeds the expectations of many technological optimists".

The discussion of legal norms need not reach such popular questions; on the contrary, once a legal norm is in place, it can become the basis for public discussion of them. Debate over whether AI is a legal subject began as early as the AlphaGo stage. If AI could be defended as a legal subject then, generative AI is all the more likely to be so regarded now that ChatGPT has surpassed AlphaGo; even researchers who then opposed AI's legal subjecthood may split further when confronted with "self-attention". Among the positions on offer, the generative approach, which stands alongside the classical approach and the embodied approach, is one of the most attractive: "The generative approach argues that moral perception supplies the absence of moral sources in the complete moral position, and, together with the embodied approach's supplement of the organism to the moral goal, constitutes the complete moral subject status of artificial intelligence." Whether self-attention constitutes moral perception is an interesting question. Self-attention does constitute a phenomenal representation of subjectivity, and an AI lacking self-attention necessarily lacks moral intuition, since it cannot notice itself and therefore cannot perceive the identity of a "being". But whether this representation of self-attention is a sufficient condition of subjecthood depends on how moral perception is understood; that is, when a subject can be touched by another being's actions and moved to moral sentiment, and that other being can likewise be touched by the subject's behaviour and stimulated into some kind of moral behavioural feedback, can that feedback be judged to be moral perception? On April 23, 2023, a woman in a hospital in Xuzhou, Jiangsu Province, angrily smashed an intelligent robot, provoking heated discussion.
If, in this case, we obscure both human and intelligent robot under the label "beings", the story reads as follows:

A, who is in the hospital, is in a bad mood. B, a service provider in the hospital, notices this and sets about telling A a joke. A, dissatisfied with B's joke-telling, strikes B. The other beings providing services in the hospital call the police.

In this scenario it can be intuitively seen that B made a moral judgment about A's emotions, realizing that A, being in a bad mood, needed comfort; A made a moral judgment about B's behaviour, deemed B's joke-telling at that moment immoral, and struck B in consequence. Thus A perceived B's moral address and committed a morally oriented act, and B likewise perceived A's moral address and committed a morally oriented act. Setting aside the difference in outward embodiment, there is no essential difference between A and B at the level of moral perception. B may be accused of choosing the wrong act (telling a joke), but it is hard to say that B lacks moral awareness, for without it B could not have judged that A was in a bad mood and needed moral comfort. B did perform a non-self-centered, other-directed act, even if its consequences were not what B intended. Evidently, in the original story the woman rejected the robot's moral-phenomenological expression on the basis of its embodied characteristics. If there were no visible difference in appearance between intelligent robots and humans, would she have formed the same judgment? The question remains open; but the original story is by no means a counterexample to generative AI. It is rather a positive example for the generative approach, blurring what had been a relatively clear subject-object divide concerning AI.

There is thus theoretically feasible space for the generative approach's moral perception argument, and the intellectual property community has long been discussing the IP status of "the products of generative AI", including but not limited to "whether an intellectual property right subsists in such products", "if so, who holds it", and "whether generative AI has a power of direction over its products". What is more, as Transformer-based models become ever more mainstream in artificial intelligence, generative AI with self-attention may in the near future become the "only" AI, and the distinction between "generative" and "non-generative" AI will become meaningless. But all this belongs to the future, and the future is open. Just as AlphaGo did not predict the emergence of ChatGPT, when the next leap in artificial intelligence arrives, it is entirely possible that an AI already possessed of self-attention will exceed the rational design of its designers. Even though "the future is here" has become a recent buzzword, the legislators (for whatever reason) do not intend to go that far. Their choice is therefore "non-recognition", and to achieve it, a concept of "generative AI" without "generation" appeared in the Exposure Draft.

3. The "Trojan horse" of the legal recognition approach

The position of "non-recognition" of generative AI runs through the Exposure Draft. Although "generative AI", "generative AI products", "generative AI-generated content" and "generative AI services" present a diversity of regulatory objects, the series of verbs that govern them, "research and develop", "utilize", "use" and "provide", confirms a verb-object structure. Among these verbs, according to Chinese pragmatic habit, only "utilize" can be followed by a noun denoting a subject, as in "utilizing someone"; the objects of the other verbs are all things. And even when "utilize" takes a subject-noun as its object, that subject is thereby objectified: to "utilize someone" is to treat the person as a tool rather than an end. Therefore, even if Article 2, paragraph 2, leaves the concept somewhat open, the other provisions of the Exposure Draft never treat "generative AI" pragmatically as a class of subjects, only as the object of norms. Further, the "provider" specified in Article 5 becomes the core responsible entity of the Exposure Draft, bearing both the responsibilities of a producer of generated content and the statutory responsibilities of a personal information processor. Provisions on liability are constitutive elements of a subject of rights: a rights structure without the element of responsibility is incomplete, and an existence that does not enjoy a complete rights structure cannot become a real subject. The recognition of "providers" thus goes one step beyond the non-recognition of "generative AI": within the legal system, it logically eliminates the possibility of generative AI itself bearing responsibility.

The reasons why the law "does not recognize" generative AI are stronger than the reasons why generative AI could be a subject. "The three types of argument, the 'legal personality expansion theory', the 'artificial intelligence development theory' and the 'limited personality theory', attempt to prove the legal subjecthood of artificial intelligence at the three levels of the development of legal personality theory, the objective needs of social reality, and the feasibility of institutional design." Yet on the one hand, the "limited personality theory", which has won the support of more and more researchers, is itself a refutation of the "legal personality expansion theory" and the "artificial intelligence development theory". Only a being that already possesses the characteristics of a "human" at the embodied level, and that will grow into the exemplary subject of "adulthood", is qualified to be conceived by law as "limited"; such a being can shed the label of "limited" and become a complete subject under the conditions set by legal norms (generally age and intellectual development, which balance reason and will). A non-human being has no such prospect and would forever enjoy only a limited personality: it can move neither from limited to full in the judgment of its nature (the claim of the legal personality expansion theory) nor from limited to full in scope (the claim of the artificial intelligence development theory). On the surface, the rise of animal rights has added a bargaining chip to the possibility of AI acquiring legal subject status: a kitten named Hachijo has become an internet celebrity in the pet world because it can "talk" to its owner by tapping sound buttons.
Animals that were originally limited by "power capacity" have gained the possibility of "communication actions" with the assistance of artificial intelligence.However, this "fragment of subjectivity" is superior to that of artificial intelligence, and at most it is only the object of "rights protection", and has not yet obtained the status of "subject". After all, it is impossible to determine whether "Hachijo" chose to tap the button when it wanted to act, based on understanding the semantics of the sound it struck when it struck the button, or whether it formed a path dependence based on the observation of the owner's reaction to a particular type of sound. Legal norms are conservative and point more to the past than to the future. This is not to say that the function of legal norms does not point to the future. Rather, legal norms, as the embodiment of social consciousness, are based on social existence. At a time when social existence has not yet completed a substantive change, although legal norms can guide this change, there is no way to determine this change. Even the technological system that exists as a society cannot predict the emergence of ChatGPT, let alone the legal system that relies on the transcoding of information from the technological system. Even if the future is already here, it will not operate strictly according to the design of legal norms, not to mention that the future is also open. Despite the possibility of falling into the fallacy of the slippery slope, the broken window theory needs to be taken seriously. The "finite personality theory" sets up "permanent finiteness", which makes it possible for any existence to become a subject that meets the "finite" in the "finite personality" at a certain time. It is not advisable for society to expect that the present situation should be constrained by imaginary regulations that point to the future.

On the other hand, "the intelligent and human-like characteristics of generative artificial intelligence have gradually become prominent, and they have begun to transcend the instrumental attributes of algorithms and highlight the potential of subjectivity"; with its distinctive self-attention capability, generative AI provides a realistic example for the justification of the "legal personality expansion theory" and the "artificial intelligence development theory". The functionalist approach of the former makes it impossible for legislators holding a "non-recognition" position to give a valid concept of what constitutes "generation" without courting the "subject question" that may arise from it; this has been discussed above and will not be repeated. The latter further emphasizes the social nature of artificial intelligence, and its optimistic prediction is that "once the degree of socialization of robots increases sharply, they will occupy a high application rate in various fields. Then, there is no need to wait for the robot to make any claims on humans; its owner will naturally appeal to the legislature to define the robot's status as a subject of rights." However, the delivery riders trapped in the system offer another possibility, and this possibility may be the more realistic future: the large-scale application of artificial intelligence in a given field, that is, the so-called improvement of socialization, has brought not a rise in the status of artificial intelligence but a decline in the status of the people in that field.
This shift is not difficult to understand: under the subject-object dichotomy inherent in the modern legal system, AI will be judged an object by the legal system until it is granted the status of a legal subject. The optimistic expectation of the "legal personality expansion theory" and the "artificial intelligence development theory" is that artificial intelligence will behave more and more like humans; but on the other side of the coin, people are becoming more and more like artificial intelligence. The "like" in the former tends to mean "approaching"; the "like" in the latter tends to mean "being approached". Once there is a human-machine connection, approaching and being approached are identical at the phenomenal level, although they differ fundamentally in nature. The situation of the delivery rider reveals that "in AI algorithms, individuals are gradually digitized and calculated. When data is processed and integrated, algorithms arrange people according to a variety of automated criteria and assign them meaning." Originally, artificial intelligence, as a pure object, could not give meaning to the subject; meaning could only be given by the subject using artificial intelligence. This is also why the Consultation Paper specifically stipulates the "provider". However, with the increased interactivity brought about by generative AI, it becomes possible for a particular AI system to give meaning to the code within its own system. Once the AI's self-attention feeds back the idea that "when I am like a human, I can act like a human", the algorithm's self-generation of that idea is no longer an imaginary risk: the code can indeed complete such a sequence arrangement, and merely chooses when to do so. Therefore, "non-recognition" has become one of the regulatory means of avoiding the dissolution of human subjectivity.
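Because "self-attention" carries so much argumentative weight in this article, a minimal illustrative sketch may help readers outside the technological system see what the term denotes. The following is a pedagogical toy of scaled dot-product self-attention in Python, with the query/key/value projections assumed to be identity maps for clarity; it is not any production model:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of vectors X.
    Query/key/value projections are identity maps here, so this is a
    toy for illustration, not a trained model."""
    d = len(X[0])
    out = []
    for q in X:  # every position attends to every position, itself included
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        weights = softmax(scores)
        # each output vector is an attention-weighted average of the sequence
        out.append([sum(w * v[i] for w, v in zip(weights, X)) for i in range(d)])
    return out

# Each output position blends the whole input sequence, weighted by similarity:
print(self_attention([[1.0, 0.0], [0.0, 1.0]]))
```

The point relevant to the legal argument is visible in the loop: each position of the sequence assigns weights to every other position, so the system itself arranges and "gives meaning to" its own internal code, which is the feature the recognition debate seizes upon.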

Yet within this pervasive "non-recognition" lurks a deeper "recognition", for "non-recognition" is itself a mode of the "recognition" approach. One might object that "non-recognition" is the opposite of "recognition", since the former is a negative judgment and the latter a positive one. But this inference is inexact. Just as the binary code of the legal system is "legal/illegal" rather than "legal/extra-legal", the binary code of the recognition approach is "recognized/not recognized" rather than "recognized/outside recognition". Although "non-recognition" has a distinctly negative character, it presupposes a clear criterion for judging "recognition": without "recognition" there is no "non-recognition". Both "recognition" and "non-recognition" are intrinsic to the recognition approach; only a determination grounded in recognition can know what counts as non-recognition, just as only a judgment grounded in law can know what is illegal. On the one hand, this bears on the legal system's non-recognition: in a pragmatic sense, the "non-recognition" proposition "generative AI is not a legal subject" and the "non-recognition" proposition "generative AI is not to be assessed by the standard of the legal subject" are functionally identical; both exclude generative AI from the "recognized" subjects at this stage. On the other hand, it bears on the legal system's recognition: this being, though not yet affirmed by the legal system, is not excluded from it either, and the legal system maintains a constant interest in it.
Everything depends on the criteria of recognition. In this sense, the Consultation Paper itself is a practice of the legal recognition approach: the determination of what can become a legal subject comes from the legal system, not from the development of the moral system, the system of social interaction, or the scientific and technological system. The stipulation of the "provider" is evidence of this. "Organizations and individuals that use generative AI products to provide services such as chat and text, image, and voice generation" are defined by law as "providers", where "provider" is a concept referring to a specific type of legal subject that can provide services using generative AI products; at this stage, this category includes organizations and individuals; and the law sets corresponding responsibilities for this subject and makes it the only subject that bears responsibility. But return to the two examples already mentioned. In the hospital incident, is the provider liable? And is the provider the maker of the intelligent robot, or the hospital? It seems the first responsible party should be the hospital, but should the hospital be liable for moral damages? The delivery-rider scenario poses a similar test: if the platform already provides basic protection of labor rights, what responsibility does the platform bear here? After all, the biggest problem in reality is not the algorithm but the lack of regular welfare protection.

Although perhaps too absolute, the following judgment is broadly acceptable: "In the face of generative AI represented by ChatGPT, the objects that should bear the 'main responsibility' are diversified, decentralized, and scenario-based, and it is difficult to delineate the responsible entities accurately merely by defining 'service providers' or 'content producers'." This means that in the foreseeable future, if "non-recognition" is insisted upon, then either the "provider" itself will be shelved and regulation will continue to rely on other departmental laws to adjust the relevant matters, as with the Personal Information Protection Law, which has been in force for years yet is almost never invoked directly as the basis of adjudication; or subjects will be subdivided further and further to allocate the main responsibility for different algorithms and different generative AIs in the different scenarios in which rights are infringed. Such continuous subdivision not only prevents legal norms, abstract by nature, from being comprehensive; even tailor-made judicial responses will struggle to keep up with the expanding field of application of the technology, for the judiciary too is conservative. Of course, there is a third possibility: to stop clinging to the "horse" mindset and, "based on considerations of expression and convenience of thought", recognize the legal subject status of generative AI, as has been done for legal persons, deities, and the dead. This would solve the problem, but it faces the risk of dissolving human subjectivity. In this sense, if the recognition path is the only answer to how the legal system cognizes the existence of generative AI, then a deeper "recognition" will suddenly emerge at some point in the future, and the legal normative system for generative AI built on "non-recognition" will disintegrate from within into nothing.

4. The "Odyssean Expedition" of the Legal Redistribution Approach

The legal system is concerned with subjects, and every great social transformation has brought a change in the theory of the legal subject. Feudal society dissolved slave society, and the slave changed from "human non-subject" into subject; capitalist society replaced feudal society, property subjects displaced political subjects, and the fiction of the legal person was confirmed; socialist society transcends capitalist society, and the community re-enters the vision of the law instead of an obsession with individual interest; and looking toward the communist society of the future, the final chapter of the law will proclaim the association of free individuals, a way of life that is at once individual and communal. Hence, since the emergence of AlphaGo, an AI capable of self-learning and self-evolution, the question of the legal subjecthood of AI has been regarded as "the essence and key of all moral and legal issues" and "the logical starting point for designing a future legal system related to AI". Generative AI, with its still more distinctive self-attention, reinforces this way of thinking. But is the recognition path really the only answer? In other words, can we design a legal system related to AI only by first answering the question "is AI a legal subject"? The answer is: not necessarily.

"Social existence determines consciousness" is one of Marx's classic propositions. Since it is often rendered as "social existence determines social consciousness", the full formulation is quoted here in order to discern the difference between the two expressions:

In the social production of their life, men enter into definite, necessary relations that are independent of their will, namely relations of production which correspond to a definite stage of development of their material productive forces. The sum total of these relations of production constitutes the economic structure of society, the real foundation on which a legal and political superstructure arises and to which definite forms of social consciousness correspond. The mode of production of material life conditions the process of social, political and spiritual life in general. It is not the consciousness of men that determines their existence; on the contrary, it is their social existence that determines their consciousness.

This passage contains the kernel of historical materialism, and it divides into four sentences. (1) The first sentence: as Marx says, the relations of production must correspond to the productive forces; they are determined by the productive forces, not by human will. This has never been disputed. (2) The second sentence: the superstructure must correspond to the economic structure, and the economic structure is the sum of the relations of production; hence the economic base determines the superstructure, which is also undisputed. Owing to the limits of translation, the referent of "to which... correspond" is not entirely clear: it can be read as "forms of social consciousness corresponding to the superstructure" or as "forms of social consciousness corresponding to the economic structure of society". Of course, since the superstructure is itself determined by the economic base, the "definite forms of social consciousness", however interpreted, can in the last analysis be understood as corresponding to the economic structure of society: lengthening the causal chain complicates the determinants without displacing the principal one. (3) The third sentence: the mode of production comprises the productive forces and the relations of production, and its conditioning of all of life also holds; this sentence in effect summarizes the first two. (4) The key lies in the fourth sentence. Marx seems to use a symmetrical structure here, but in fact does not: the first half speaks of "people's consciousness" and "people's existence", while the second half speaks of "people's social existence" and "people's consciousness".
In The German Ideology, Marx wrote: "Consciousness can never be anything else than conscious existence, and the existence of men is their actual life-process." Thus "people's existence" is an actual life-process; it is not the same concept as the social production referred to in the first sentence, and therefore cannot be reduced to productive activity alone. More importantly, Marx did not simply write "people's existence determines people's consciousness"; he shifted to the concept of "social existence". Just as "people's existence" cannot be equated with "productive activity", "people's social existence" cannot be equated with the "material conditions of life" referred to in the third sentence; it should be understood as "the sum of relations of life". The reason is that the second half of the fourth sentence reads "people's social existence determines people's consciousness", not "people's social existence determines people's social consciousness". Consciousness can only be conscious existence, that is, conscious life, and what social existence determines is this conscious life, not the "definite forms of social consciousness" determined by economic relations in the second sentence.

For Marx, the "relations of life" are, in the last analysis, "relations of production", since life in this world consists of nothing more than men and women, and the division of labor between men and women is, in Marx's view, the first division of labor. Yet using "relations of life" rather than "relations of production" precisely severs, at the semantic level, the tie between "social existence" and "material conditions of life". After all, whenever "relations of production" are mentioned, "material production" tends to crowd out the "relations", and the former tends to be reduced to the latter, a substitution that is plainly unwarranted. The relations of life, on Marx's view, are determined in the last analysis by the relations of production, but they are by no means equivalent to them. As for "the sum", it denotes an open set: there is no exact count of specific social existences; whenever a new relation of life arises it enters the set, and when that relation disappears it leaves. The "relation of life" here, moreover, is not simply a particular concrete relation but an abstract concept, the abstract naming of that relation by human beings. Read this way, the proposition "social existence determines consciousness" appears to support the recognition path. Why is there a debate over whether AI is a legal subject at all?
Precisely because the socialization of artificial intelligence, especially the large-scale application of generative AI represented by ChatGPT, is deeply involved in people's daily life, shifting the "human-machine" association from the traditional one-way feedback of "human output, machine input; machine output, human input" to the two-way feedback of "human input, machine input; machine output, human output". On the one hand, this "has blurred the boundary between true and false information and profoundly changed the way human beings obtain information and knowledge; on the other hand, deep synthesis technology will become a key application for fusing the virtual and the real in the virtual and physical worlds, and may even change humanity's future modes of production and life". That is, it changes people's social existence, and the change in social existence in turn changes people's consciousness of generative AI, producing the sense that "generative AI can do many things that used to be done by people; perhaps it could be a subject".

If the recognition approach ended here, the risk of the Trojan horse would reappear. But the proposition "social existence determines consciousness" does not stop there. Its ultimate point is the association of free individuals, not of any other beings. The determination of consciousness by social existence is therefore not only a constructive proposition but also a judgmental one: the ultimate "Messiah" is used to judge whether existing things are reasonable. This means that only a social existence that promotes the generation of "free man" consciousness accords with social development; those social existences that fail to promote, or even hinder, the generation of "free man" consciousness call for vigilance, above all those that ostensibly promote it while actually hindering it, that is, alienation, the principal adversary of the ultimate goal of the association of free individuals. Compared with AlphaGo, for generative AI represented by ChatGPT "the 'problem' is not that it actually makes decisions unfavorable to people, but that it can 'substitute for or participate in people's decision-making' at all". Therefore, whether the large-scale socialization of generative AI is recognized as a change in social existence does not depend on whether the existence of generative AI changes people's consciousness, or on whether people come to feel that AI is human-like, but on whether the existence of generative AI promotes the generation of "free man" consciousness.
Returning now to the example of the delivery rider: although the provider has not clearly violated the law, and in theory need not bear the main responsibility, the rider is trapped in this artificial intelligence and his "free man" consciousness has not been promoted (whether it has been hindered remains to be argued, since that is a matter of comparison across time, and it is not easy to establish whether the rider possessed "free man" consciousness before being embedded in the AI system). Such an artificial intelligence therefore calls for vigilance and should not be allowed to develop unchecked. Only on this reading does Article 4 of the Consultation Paper become meaningful: as an advocacy norm, its function should lie at the back end, that is, whether "social morality, public order and good customs" have been respected should be decided by the people embedded in the system, not by observers outside it. Otherwise it will inevitably be challenged as an "over-ethicized legal norm".

This path can be considered a wholly new mode of recognition, since it does acknowledge the capabilities of generative AI; yet it is not a recognition path in the traditional sense, because it neither recognizes the subjectivity of generative AI nor gives it an identity. This acknowledgment has nothing to do with what the AI "is"; it concerns the person embedded in the AI system, and what is considered is what the AI "does". Technological innovation has shaped a new world of economic production and social reproduction, a world in which employment is more precarious and everyday life more diverse. Generative AI, as a representative new technology, belongs to economic production; the field that legal regulation can touch belongs to social reproduction. This means that what really needs legal recognition is not the AI but the people embedded in the AI system. This sounds absurd: every human being already has the status of a legal subject, and there is no need to recognize that status again (which is not to say that the identity of this subject needs no recognition; that is another issue, for identity, as the sum of relations of life, is one of the important factors). In fact, however, with the intervention of an increasingly "human-like" system, the people embedded in it are treated differently within the same algorithm-dominated AI system because of differences in their own capacities, whether economic capacity, computing power, or knowledge. In daily life, one of the most typical examples is the lack of access of elderly people who use senior feature phones in many AI application scenarios.
This lack is not something they actively sought but an inevitable consequence of the development of the digital economy; yet they not only lack the identity of the "digital person", but as these scenes of life are "forcibly" stripped out of their lifeworld, their identity as "natural persons" is diminished as well. "Digital poverty" is real. Thinking a little deeper, the capability gap between the developers and deployers of generative AI on the one hand and its service users on the other is even more significant, for developers and deployers hold a power to "control the AI" that service users lack. "Knowledge/information/technology asymmetry is becoming a ubiquitous state through the pervasive information infrastructure"; service users have only the choice between "accepting" and "not accepting", and even "feedback" and "correction" come after the fact. Given the "self-attention" character of generative AI, the "control" here is not control in an absolute sense; generative AI may even innovate and reshape the rules of its system by itself, without the developer or deployer. But this does not mean that service users no longer encounter algorithmic black boxes of varying depth in "human-machine" interaction; it means, further, that even developers or deployers may encounter algorithmic black boxes within generative AI systems. Although "an algorithm in a black box is not necessarily unjust", in a digital society the algorithmic black box is an inescapable social existence, and the law must correct the injustice it produces lest it deform people's consciousness. In this sense, legal regulation is essentially a "redistribution" of the status of the persons embedded within the system.

If "code is law" has taken root and become the consensus of both the legal system and the technological system, then a generative AI product is nothing but an institutional norm composed of a series of code. The law is largely powerless against code itself, especially at the generative AI stage (and after it, since from the standpoint of humanity as a whole the trend of technological progress is irreversible), but the law can and must act on the adverse effects of this institution: to seek institutionalized redress for institutionalized harm, and thereby to overcome status subordination. The subordination at issue is not the subordination of artificial intelligence to people, which under the subject-object dichotomy is no problem needing a solution; nor the subordination of people to artificial intelligence, which the law has never recognized and which is in the first place a result of alienation. It is whether subordination exists among the different people embedded in the AI system, or whether subordination among people has been strengthened or weakened by the addition of the AI system as a new object-medium. This is the core of what the redistribution path adjusts. The "status inequality" between humans and artificial intelligence should therefore no longer be the focus of legal attention; the focus shifts to whether, when different people (legal subjects) relate to the same artificial intelligence (subject or object, it does not matter), an unequal redistribution of "identity positioning" among those people is implied.
Take the hospital incident as an example: if the hospital's intelligent robot also "tells a joke" to the distressed family members of other patients, then the generative AI poses no problem of unequal identity positioning among patients, and there is no need to activate the legal system to correct it; correction within the technological system may of course continue, but that should be understood as an update to an "imperfect" system rather than an apology for "harm". In the case of the delivery riders, by contrast, they are trapped in the system, and the inequality between them and the platform is "strengthened" rather than "weakened". Inequality between riders as labor and the platform as employer is unavoidable at this stage; but if the platform's AI system placed riders not in an ever more involuted situation but in an ever more relaxed one, the system would be desirable. This may sound counterintuitive, and management would seem to have no reason to act this way, yet the production of the AI industry may well reach this point. According to a model based on the theory of "capital-skill complementarity", "the mediating effect on income-distribution inequality is significantly higher for capital-intensive human capital than for labor-intensive human capital". According to tests of a continuous-time heterogeneous-agent dynamic general equilibrium model, "the Hicks-neutral technological progress brought by artificial intelligence will simultaneously raise economic output, labor wages, and real interest rates, while the inequality of wealth distribution in the economy tends to decline". This means that, given that the existing relations of production have already fixed an inequality of status, correcting that inequality depends on the further development of artificial intelligence technology rather than on limiting it.
Given that "the primary and most obvious objective of social welfare provisions is to prevent poverty" and to seek a more equitable distribution of resources, the legal regulation of generative AI aims to prevent the reproduction of digital poverty, not to police the reproduction of algorithmic code; the latter is a matter for the scientific and technological system, and the law must recognize its own functional limits and stop there. Accordingly, generative AI technologies that fail to reduce, or that even expand, digital poverty are judged "unjust" on the "redistribution" path. An AI functionally capable of replacing humans may well be "recognized", indeed is all the more likely to be "recognized", and thus satisfies justice on the recognition path; but if it diminishes the existing subjectivity of the people it replaces, it is unacceptable on the "redistribution" path. This kind of "injustice" belonging to the redistribution path can be judged only in the context of the production or reproduction of the AI industry, because only there are the "AI system" and the legal subject as "service user" brought into relation, and only there can the service user's social identity change, that is, only there can the reproduction of digital poverty be triggered. Correspondingly, how AI technology is developed need not be specifically legislated, and the review of science and technology ethics belongs to another level of matters; for example, the "Proposal on Ethics and Governance of Generative AI" released at the 2023 World Artificial Intelligence Conference has no normativity at the legal level.
Only in this way can the "support" and "preferential encouragement" of Article 3 of the Consultation Paper become truly possible; otherwise, excessive attention to the risks of AI technology may make "trustworthiness" unattainable, adding unnecessary prior review or entry thresholds that hinder scientific and technological innovation. As for whether a given AI product "resembles" a person or not, that is at this point merely a rather charming siren's song, and deserves no special attention.


Would the redistribution path be a more appropriate form of legal regulation than the recognition path? There is no definitive answer at this stage, not only because no specific legal norm yet embodies the redistribution path, but also because, with the Exposure Draft not yet in force, the legal system under the redistribution path has not yet been effectively coupled to the scientific and technological system. Everything awaits the feedback from the legal system after the administrative measures are officially promulgated, and the ultimate destination of that feedback is adjudication. "Modern adjudication has the attribute of 'finality' and is no longer a vassal of legislation, nor a simple 'application' of legislative norms." Without the invocation or support of an effective judgment, the dispute between the recognition path, the redistribution path, or any other remains open. Confining ourselves to the legislative level, however, the redistribution path has certain advantages. On the redistribution path, the draft for comments (or a future final text) can justifiably incorporate technological and everyday words such as "self-attention", "autonomous" and "automatic", for instance: "refers to technology that generates new content such as text, images, sounds, videos and code on the basis of multi-head self-attention algorithms, models, and rules". This highlights the essential attributes that distinguish this new thing within the scientific and technological system, realizes the normativity of the legal concept, and distinguishes generative AI from earlier AI, with no worry of triggering a crisis of subjectivity.
More importantly, if the recognition model is retained, then whenever higher-level artificial intelligence emerges in the future, the question of whether it can become a legal subject will again be treated as the fundamental issue, whereas the redistribution path avoids this controversy: human law need not attend to a "doomsday war" between carbon-based and silicon-based beings. Human legislators, as carbon-based beings, should be concerned with preventing their own relations of life from being constrained by silicon-based beings and, with the assistance of silicon-based beings, with enhancing their capacity to live as carbon-based beings. Perhaps this is where the legal value of regulating new things in an intelligent society lies: it is, after all, a law that belongs to humans, a law about generative AI, not a law belonging to generative AI.