
Can Judicial Artificial Intelligence Achieve Judicial Justice?



Author: Lei Lei

Dean and Professor, Law School, China University of Political Science and Law



Abstract: Judicial artificial intelligence has comprehensive advantages over human judges, but it also has insurmountable weaknesses of its own: it cannot deal with uncertainty, it does not possess human common sense, and it is unable to make value judgments. The basic operating logic of judicial artificial intelligence is prediction based on historical data, which includes both the prediction of similar cases based on historical data about precedents and personality prediction based on historical data about the individual judgments of courts or judges. In the prediction of similar cases, judicial artificial intelligence helps to achieve judicial unity, but it may not achieve formal justice in the sense of "like cases be treated alike" or adjudication according to the law, and it is even more likely to contradict substantive justice. In personality prediction, the realist logic of "judge profiles" may run counter to substantive justice, and the commercial preference logic of "buying and selling judges" will inevitably erode the idea of procedural justice. Therefore, judicial artificial intelligence cannot achieve judicial justice, and its appropriate positioning at present is as an auxiliary means of judicial adjudication.

Key words: judicial artificial intelligence; substantive justice; procedural justice; prediction of similar cases; personality prediction


The application of artificial intelligence technology in the judicial field has become an important fulcrum of China's national informatization development strategy. The Outline of the National Informatization Development Strategy, released in July 2016, proposes the construction of "smart courts," which promote the disclosure of law enforcement and judicial information and promote fairness and justice in the administration of justice. In April 2017, the Supreme People's Court issued the Opinions of the Supreme People's Court on Accelerating the Construction of Smart Courts, which point out that a smart court is an organizational, constructional and operational form in which the people's courts make full use of advanced information technology to support the online handling of all business, the disclosure of all processes in accordance with the law, and all-round intelligent services, so as to realize fair justice and justice for the people. At the Fifth Session of the Thirteenth National People's Congress, the Report on the Work of the Supreme People's Court treated "in-depth promotion of the reform of the judicial system and the construction of smart courts" as an independent section, presenting it as "adhering to the dual-wheel drive of institutional reform and scientific and technological innovation" through which "the people's sense of access to justice has been increasing".

Some judges have pointed out that, in the judicial field, "the use of artificial intelligence can greatly reduce the repeated use of manpower, reduce the interference of human factors in the administration of justice, and greatly improve the efficiency of judicial activities". This view is highly representative. There is little dispute that judicial artificial intelligence matters for the improvement of judicial efficiency and for easing the contradiction of "too many cases, too few people". But it is doubtful whether judicial artificial intelligence can realize the value of judicial justice. Of course, if judicial justice is recognized as a value demand that can be realized to different degrees, then in what sense and within what scope can judicial artificial intelligence promote judicial justice? Unlike judicial efficiency, which can be tested by empirical research, judicial justice is mainly a matter of theoretical judgment, which requires penetrating the technical logic of artificial intelligence to see whether it matches the concept of judicial justice.


1. What is “judicial justice”?


Judicial justice in the broad sense involves all areas and aspects of the judicial process, and is related to the authority of the judiciary, the degree to which judicial activities are recognized by the ethics of society, the macro-construction of the judicial system, and the rationality of the judicial process. By contrast, judicial justice in the narrow sense is mainly related to the activity of judicial adjudication. Judicial adjudication is a value-oriented activity, and judicial justice is the value appeal of judicial adjudication. It is generally believed that to judge whether modern adjudication has the value of justice is to see whether it has both the value of justice in the judicial result and the value of justice in the judicial process. Accordingly, judicial justice contains two types, namely substantive justice and procedural justice. It is also generally believed that substantive justice is the fundamental goal of judicial justice, while procedural justice is an important guarantee of judicial justice. Procedural justice focuses on the fairness of the litigation process, and its most important principles are procedural autonomy and the equal treatment of the parties. In the proceedings the judge should extend the same attitude, the same rights and the same opportunities to all parties. Therefore, whether the parties have been given the opportunity to participate in the litigation, whether their rights to state their case, to present evidence and to debate have received the same attention during the proceedings, whether the judge has treated both sides without any bias, and whether the claims and evidence of the two parties have been considered and evaluated equally become the main criteria for judging whether a procedure is just. The procedural steps and modalities are designed, on the one hand, to ensure a predictable outcome of substantive justice and, on the other hand, on the basis of the inherent, intrinsic value of the procedure (procedural value).

Substantive justice, on the other hand, is more related to the outcome of a decision, and specifically includes both formal justice and substantive justice. Formal justice encompasses equality, that is, treating the same things in the same way, as well as the stability of the law. The stability of the law has several meanings: first, citizens can obtain information about their legal status and the legal conditions of that status on the basis of the law (knowability); second, citizens can rely conclusively on such correct information to act (reliability); third, citizens can foresee the specific legal decisions made by government officials (predictability); and, finally, government officials must follow the existing effective positive law, must base their judicial and law-enforcement decisions on general legal norms that have been determined in advance, and their discretion is subject to constraints (constrainability). At the center of these layers is the requirement of predictability. Formal justice translates into the judge's obligation to "judge according to the law," because adjudicating in accordance with the law originally implies the requirement to handle cases in accordance with general rules promulgated in advance: "promulgated in advance" implies that adjudicative standards are made visible, and "general rules" implies that the same kind of case receives the same kind of judgment. Of course, the other side of the coin is that judges are bound by the law. This means that the ways and means of realizing the values expressed through the enactment of the law are binding on the judge, because unpublished provisions, the need for legal continuity, or the legal contradictions that arise in a specific case make it inevitable for the judge to resort to the legislator's evaluations (purposive interpretation). At the same time, the judges' direct penetration into the purpose of the rules being applied makes them participants in the shaping of the law.

Unlike the legislator, the judge does not shape the law in a general sense, but on a case-by-case basis. As the German jurist Larenz puts it: "Jurisprudence is concerned not only with clarity and the stability of the law, but also with the realization of 'more justice' by working progressively on specific details." Thus, judges need to consider the application of general rules in individual cases, taking into account the many elements of the context of application, including the concept of justice reflected in the social effects of the outcome of the decision. This is embodied in the judge's obligation to take "individual justice" into account. Compared with the general concept of justice, justice in the individual case has two characteristics: first, it is concrete rather than abstract justice; second, it is legal rather than purely ethical justice. On the one hand, case justice is substantive justice embodied in specific cases. Substantive justice involves substantive values or moral considerations. Such substantive values or moral considerations have a certain scope, or are limited: they should be derived from the mainstream values or concepts of social justice prevailing in the country or region in which the judgment is rendered, or in line with the moral concepts of the majority of its people, rather than from extrajudicial conceptions of justice. However, the concept of social justice is difficult to articulate or "encode" as a clear set of rules; it is often a form of empathetic justice that incorporates different levels of social reasoning. Empathetic justice is a pluralistic and dynamic justice. Here, the context of each case is particularly important. Different contexts mean that empathetic experiences naturally differ, which results in the complexity and diversity of the connotations of justice. One of the most central features of justice in individual cases lies in its "case-by-case" nature. In other words, substantive justice in the form of case-by-case justice often cannot be dealt with in a generalized, rule-based manner; it always has to face the different circumstances of each individual case and uncertainties that were not foreseen beforehand. The practical wisdom, expedient consideration and empathetic capacity of "adapting to the time," "adapting to the place," and "adapting to the situation" in the face of uncertainty are precisely important features of substantive justice in judicial activities. On the other hand, justice in individual cases is legal justice, which has to take into account both the moral concepts of society and institutionalized values, that is, value judgments supported by the legal system, such as the basic principles and values stipulated in the general provisions of the codes and individual laws.

Judicial justice is therefore a comprehensive value in the field of justice, encompassing both substantive and procedural justice, and both formal and substantive justice.


2. The operational logic and flaws of judicial artificial intelligence


There are a large number of different technical approaches to artificial intelligence research, among which the two most studied and dominant are the brute force method and the training method. The basic principle of the brute force method is: first, establish a search space based on a precise model of the problem; second, compress the search space; third, enumerate all options in the compressed space to find a solution to the problem. The basic premise of the brute force method is that there is a well-defined, precise model of the problem to be solved, and this model is by default a symbolic model, with logical formalization, probabilistic formalization and decision-theoretic formalization as the dominant forms. The brute force method includes two main types: the reasoning method and the search method. The search method searches in a state space (for example, Monte Carlo tree search), while the reasoning method reasons over a knowledge base and usually consists of an inference engine and a knowledge base. The inference engine is a computer program responsible for reasoning, developed by a professional team, while the knowledge base needs to be developed by the developer for different applications (an expert knowledge base). The working principle of the training method is to use an artificial neural network to represent the input and output format of a given problem (the meta-model), and then use a large amount of labeled data to train the meta-model, that is, to adjust the connection weights of the artificial neural network so as to obtain a specific qualified model. This training follows the principle of data fitting. Each sample in the training set contains an input value and an expected output value. During the training process, the deviation between the output value of the artificial neural network being trained and the expected output value labeled in the training sample is repeatedly compared, and the parameters of the meta-model (i.e., the connection weights in the artificial neural network) are adjusted using a supervised learning algorithm to minimize the overall deviation. It can be seen that the brute force method uses knowledge and reasoning to solve problems, requiring the preparation of a relevant knowledge base for a certain application scenario and then using the inference engine to answer questions; the training method requires first collecting and preparing training data sets, training a qualified neural network, and then using that network to answer questions.
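To make the data-fitting principle of the training method concrete, the following minimal sketch adjusts the connection weights of a tiny artificial neural network by supervised learning so that the deviation between its outputs and the labeled expected outputs shrinks. The network size, data and learning rate are hypothetical choices for illustration only, not the configuration of any actual judicial system.

```python
# A minimal sketch of the "training method": connection weights are adjusted
# by supervised learning to reduce the deviation between network outputs and
# labeled expected outputs. Data here are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each sample pairs an input vector (e.g. encoded
# case features) with an expected output value (e.g. an encoded outcome).
X = rng.normal(size=(200, 4))                                  # input values
y = (X @ np.array([0.5, -1.0, 2.0, 0.0]) > 0).astype(float)    # expected outputs

# Meta-model: one hidden layer; W1 and W2 are the connection weights adjusted
# during training.
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def forward(X, W1, W2):
    h = np.tanh(X @ W1)                        # hidden activations
    return 1 / (1 + np.exp(-(h @ W2))), h      # predicted output in (0, 1)

lr = 0.5
for epoch in range(500):
    p, h = forward(X, W1, W2)
    # Overall deviation between network output and labeled expected output.
    loss = np.mean((p.ravel() - y) ** 2)
    # Gradient descent: adjust the connection weights to reduce the deviation.
    grad_p = 2 * (p - y[:, None]) / len(y)
    grad_z2 = grad_p * p * (1 - p)
    grad_W2 = h.T @ grad_z2
    grad_h = grad_z2 @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h ** 2))
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print(f"final mean squared deviation: {loss:.4f}")
```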

The brute force method uses explicitly expressed knowledge to reason about and solve problems, so it is interpretable, while the training method uses manually labeled data to train artificial neural networks (or other implicit knowledge representation models) and solves problems with the trained networks, which are not interpretable. Applying the brute force method and the training method to judicial adjudication generates two corresponding modes of artificial intelligence operation: one is the algorithm of explicitly coded, closed rules, which simulates human legal reasoning and applies it to judicial decision-making through a legal expert system; the other is the machine learning algorithm, which is trained through the analysis of big data, discovers the inherent regularities of human judicial adjudication, and applies them to the prediction of future adjudication. The latter is a product of the combination of artificial intelligence and judicial big data.

In the big data era, judicial artificial intelligence operates on the basic principle that open judicial data is processed through natural language processing, fed into machine learning algorithms, and turned into one or more models for predicting or anticipating the likelihood of winning or losing a case. The goal of such an algorithm is not to reproduce legal reasoning, but to find correlations between the parameters of a judgment. In fact, all that a machine learning algorithm can do is to correlate a set of observations (inputs) with a set of possible outcomes (outputs) in an automated way using multiple predefined configurations. It constructs categorical links between the different vocabularies that make up a judgment: a specific vocabulary set in the input phase (characterizing the facts of the case) corresponds to a specific vocabulary set in the output phase (characterizing the conclusions of the decision). Its basic principle is similar to that of a machine translation system such as Xunfei's, which can only estimate the most probable match between a set of words and a completed translation, but cannot really "understand" the meaning of the sentence being processed. The difference between the brute force method and the training method lies only in this: in the former, which input values (factual features of the case) are related to the output value (the conclusion of the decision) is pre-set by the computer program, that is, by human beings, and the judicial artificial intelligence is only responsible for calculating in accordance with the set model; in the latter, the relations between the input values (factual features of the case) and the output value (the conclusion of the decision) are acquired by the artificial intelligence through training, and the model on the basis of which the result (output value) is obtained is opaque to the outside. However, the basic operating logic of both is the same, that is, value-neutral, passive application based on closed scenarios.
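As a hedged illustration of this vocabulary-to-vocabulary correlation, the sketch below fits a bag-of-words classifier that links words describing case facts to outcome labels. The sentences, labels and the choice of scikit-learn are assumptions made purely for illustration; the point is only that the model matches word patterns statistically, without any legal reasoning.

```python
# Toy illustration: a model that links the vocabulary of case facts (inputs)
# to decision outcomes (outputs) without understanding meaning. Texts and
# labels are invented, not real judgments.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

fact_descriptions = [
    "borrower failed to repay loan on agreed date",
    "lender provided written contract and transfer records",
    "defendant denies receiving the disputed payment",
    "no written agreement and no witnesses to the transfer",
]
outcomes = ["plaintiff wins", "plaintiff wins", "plaintiff loses", "plaintiff loses"]

# Bag-of-words plus logistic regression: purely statistical word-outcome links.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(fact_descriptions, outcomes)

# The "prediction" for a pending case is the best statistical match,
# not the product of legal reasoning.
print(model.predict(["written contract exists but borrower denies the debt"]))
```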

This also leads to the following three characteristics of existing judicial artificial intelligence technology (and of artificial intelligence technology in the general sense). First, closedness. In the reasoning method, closedness is manifested in the existence of a fixed, limited set of knowledge that can completely describe a given application scenario. In the training method, closedness is manifested in the availability of a fixed, limited set of representative, manually labeled data that can fully describe a given application scenario. An application scenario with closedness can therefore be handled successfully by applying the brute force or training techniques of artificial intelligence; if the scenario is not closed, the application is not guaranteed to succeed (and may fail). Second, passivity. Existing artificial intelligence technology does not have the capacity for active application; it can only be applied passively. Even if, under the training method, artificial intelligence has so-called "autonomous" learning ability, or even deep learning ability, this so-called "self-learning" can only be achieved when humans have given the input and output format of the problem and a pre-labeled training data set, and in closed scenarios (such as "playing Go"). Such "self-learning" is completely pre-arranged by the designer and is not self-learning in the usual human sense. Third, value neutrality. That is to say, artificial intelligence technology itself is neither good nor bad; the way people apply it determines its goodness or badness. In the case of the reasoning method, for example, whether the answer given by an inference engine will be harmful to a person depends entirely on whether the knowledge base contains knowledge that may imply undesirable consequences. Since the knowledge base is written by human beings, it is the designer who determines the goodness or badness of a specific application of the reasoning method.

Based on the above, it is easy to see that while judicial artificial intelligence has advantages that human judges cannot match, namely the swiftness and accuracy of data search, comparison and correlation, it also has insurmountable shortcomings:

The first is the inability to cope with uncertainty. The closedness required for the successful application of artificial intelligence techniques makes them vulnerable in the face of unanticipated inputs. The brute force method cannot avoid manual modeling, and the three challenges that uncertainty poses to it are also challenges to its modeling. First, the uncertainty of objects. In the real world there are often many unpredictable "variants" of an object, and it is not feasible in engineering terms to exhaust all variants of an expected object in all possible application scenarios when modeling. For example, although a typical case of "self-defense" can be established, it is impossible to exhaust all scenarios of self-defense in advance. Second, the uncertainty of attributes. In the real world, attributes are often ambiguous and scenario-dependent. Once a formalized description of any benign definition of an attribute is given, some possible scenarios of that attribute are artificially fixed and other possible scenarios are discarded. Therefore, modeling with formal methods cannot, in principle, guarantee coverage of all possible scenarios that may be encountered in practical applications. For example, what does it mean for the circumstances of a case to be "aggravated"? Such open-ended evaluative concepts are difficult to fix into a limited number of operational criteria. Third, the uncertainty of associations. Phenomena in the real world are interrelated, and a phenomenon can have different attributes in different scenarios, be associated with different objects, and be associated in unpredictable ways. The model of an artificial intelligence system cannot predict or describe all possible associations in the real world, and when the system encounters associations that are not expressed in the model during actual operation, it cannot respond effectively. For example, a reasoning model may associate the phenomenon of "shooting at another person" with "intentional homicide" or even "self-defense", but leave out associations such as "combat" or "execution of duty". The training method bypasses manual modeling, but its performance depends on the "quality" of the data and its manual labeling (data plus labels), and this quality assurance rests on the "sampling consistency assumption", that is, the consistency between the probability distribution of the sampled data and the probability distribution of the actual cases. However, the actual cases may not contain the necessary feature points, so this assumption cannot be guaranteed in practice. The occasion of judicial adjudication is precisely such a non-closed occasion, which has to face many uncertainties of the real world.

Secondly, it does not have human common sense and common feeling. Human beings have complex knowledge and complex reasoning abilities, and a large part of these belongs to "universal" common sense and common feeling. A person may have multiple identities, such as "judge" (in adjudicative activities) or "chess player" (in chess tournaments), but he is first and foremost a "human being". No matter what field of activity he is engaged in, in addition to his professional knowledge and competence, he brings to any activity he undertakes the social common sense and common feeling that he possesses as a human being. This common sense and common feeling belongs to the underlying knowledge and logic of the real world that cuts across any field of expertise, and in many cases it tends to be tacit. But artificial intelligence does not have this kind of common sense, which is why AlphaGo can beat a human Go player (because that requires only specialized knowledge and calculations about the odds of winning) but "does not know" that the pieces are not edible. Because humans and artificial intelligences are completely different "species", the difficulty of tasks is often reversed: what is easy for humans is often hard for artificial intelligence, and what is hard for humans (such as playing Go) is often easy for artificial intelligence. So the fact that an artificial intelligence beats humans at Go by no means implies that it can beat humans at things that are easier for humans. Thus in judicial activities, even simple cases that require only common-sense judgment may be difficult for an artificial intelligence, because, in the final analysis, there is no general artificial intelligence that can fully simulate human intelligence (including computation over domains of common sense).

Finally, it is unable to make value judgments. Existing artificial intelligence technology cannot autonomously form "value judgments", let alone make decisions based on such value judgments. Lawyers do not see legal rules as static legal formulations, but rather as means to specific ends. By contrast, an artificial intelligence is unable to understand the "meaning" of different arguments or the "for" or "against" relationship between these arguments and a particular conclusion. Especially in hard cases, judges often have to go beyond the text of the law and make complex value trade-offs. Value trade-offs are not computational and cannot be quantified or codified. So hard cases often become anomalies that algorithmic systems cannot anticipate or respond to.

Overall, judicial argumentation consists of two steps: first, linking consequential facts to causal facts; second, subsuming such causal facts under the constituent elements of a norm and linking particular legal consequences to them. The two associations involve causation and attribution, respectively. Judicial artificial intelligence in fact reduces these two steps to a simple data association, that is, it links the consequential facts (input values) directly to specific legal consequences (output values) through general algorithmic rules. Such associations are not "reasoning" but predictions based on historical data. There are two types of predictions made by judicial artificial intelligence. One is to predict the outcome of the current case based on the historical data of all previous cases of the same type (prediction of similar cases). Who the specific subjects (judges) were that decided those previous cases is irrelevant to this prediction. The other is to predict the outcome of the current case based on the historical data of an individual court's or judge's decisions (personality prediction). For example, commercial companies have launched big data products based on "judge profiles", that is, products that use a judge's past trajectory of decisions in similar cases to analyze and predict his or her decision-making behavior. The relationship between these two types of prediction and judicial justice is discussed below.


3. Prediction of similar cases: does judicial unity equal judicial justice?


Prediction of similar cases is based on the historical data of decisions in similar cases. It is often assumed that judicial artificial intelligence helps to achieve the same adjudication of like cases, and that such same adjudication embodies judicial fairness. However, neither judgment holds.


3.1 Does judicial unity mean "like cases be treated alike"?

As mentioned before, the applied principle of judicial artificial intelligence is prediction based on historical data. In other words, it centers judgment on the imitation of past decisions, that is, it holds the mindset that history determines the future. And this kind of thinking accords with the criterion of closedness that the technical conditions of artificial intelligence must meet in order to succeed. Specifically, an application scenario is closed to the brute force method if it meets the following conditions: (1) the design specification of the scenario can be fully described in terms of a finite number of deterministic factors (variants), while other factors can be ignored altogether; (2) these factors collectively follow a set of domain laws that can be adequately expressed by an artificial intelligence model; and (3) with respect to the design specification of the scenario, the predictions of that model are close enough to the actual situation. An application scenario is closed to the training method if it meets the following conditions: (1) there exists a complete and defined set of evaluation criteria that adequately reflects the design specification of the application scenario; (2) there exists a finite, defined representative data set whose data can represent all other data in the scenario; and (3) there exist an artificial neural network and a supervised learning algorithm such that, after the network is trained with this algorithm and the representative data set, it satisfies all the requirements of the evaluation criteria. In the application scenario of judicial adjudication, there is only limited historical data of similar cases at the point in time when the pending case is adjudicated. The closedness conditions can ensure that adjudication based on limited historical data maintains uniformity, that is, that uniform adjudication (judicial unity) is achieved. But the question is: does judicial uniformity mean that like cases are decided alike? Not necessarily. The key is that what counts as "the same case" cannot, or should not, be determined by the artificial intelligence system itself, for reasons that are both technical and theoretical.

The technical reason still lies in the challenge of uncertainty mentioned above. This can be elucidated through the operational model of artificial intelligence, which involves three layers of space, namely the reality layer, the data layer and the knowledge layer. The bottom layer is the reality layer, the very complex, fuzzy and concrete world of human reality. The middle layer is the data layer, in which data is obtained from the reality layer through various means of data collection (for example, manual collection and machine perception); this layer is abstract and formatted. During data collection, a portion of the information is captured while an infinite amount of information in reality is discarded. From the data layer, knowledge can be obtained after manual modeling or through machine learning; knowledge is structured and contains semantics. On the knowledge layer, natural language processing, reasoning, planning, decision-making and so on can be carried out. There are two ways to leap from the reality layer to the data and knowledge layers: one is through manual construction, and the other is through autonomous machine perception.

In the case of manually constructed databases and knowledge bases, the completeness of the database, or the comprehensiveness of the sampling, affects the judgment of what counts as a "same case". This is because the reliability of machine adjudication depends largely on the quality of the data it uses and on the choice of machine learning techniques. For example, in the current machine-learning criminal case bases, basically the entire sample consists of guilty verdicts, while the number of not-guilty verdicts in China is in fact so low as to be nearly "zeroed out". It is virtually impossible to intelligently predict acquittals with a database built on samples of guilty verdicts. In other words, it is still subjective human beings who decide which case data will be "included" in the sample database and thus become machine learning samples. Of course, it should be noted that "labeled samples" are only a feature of classical training methods, and the training method in the big data era may no longer require manual labeling. But in any case, the application of the training method to adjudication will always operate on the basis of a judicial case base, which in any case depends on human beings. In the other approach, intelligent robots autonomously perceive the real world, acquire data, extract knowledge from it, and use that knowledge for understanding, reasoning, planning and decision-making to produce robotic actions that are executed in the reality layer. The operation of an intelligent robot forms a complete closed loop, from the reality layer back to the reality layer, so the uncertainty contained in the reality layer has a non-negligible impact on the robot. Here, the problem model formed by the artificial intelligence system at the knowledge layer can only cover a part of the reality layer, and at most produces correct solutions within that range. In the face of unanticipated inputs that do not fall within this coverage, it is likely to produce incorrect solutions; for example, an artificial intelligence system for skin disease diagnosis might "diagnose" a rusty old truck as having measles. There are as yet no "judicial robots" that can directly perceive (and hear) real cases and make decisions, so this path is not feasible, at least for the time being, in the judicial domain. Of course, a technical problem is a technical problem precisely because it can be solved or nearly solved. For example, with the creation of a full-sample case base, the sampling problem may be solved to a significant degree. And, for example, as deep learning capabilities further improve, the coverage of the reality layer by the problem model of the artificial intelligence system may become larger and larger. However, as long as general artificial intelligence has not been created, the possibility of judging the "rusty old truck" and "measles" to be the same case will still exist.
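The sampling problem described above can be sketched with invented numbers: when nearly every case in the training base is a guilty verdict, a model fitted to that base in effect learns to predict "guilty" for everything, so acquittals cannot be predicted from it. The figures, features and classifier below are purely hypothetical.

```python
# A fabricated illustration of a case base made up almost entirely of guilty
# verdicts: the fitted model predicts "guilty" for virtually any new case,
# including ones resembling the rare acquittals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

n_guilty, n_acquitted = 4990, 10                         # heavily skewed base
X_guilty = rng.normal(loc=0.4, size=(n_guilty, 3))       # encoded case features
X_acquitted = rng.normal(loc=-0.4, size=(n_acquitted, 3))
X = np.vstack([X_guilty, X_acquitted])
y = np.array([1] * n_guilty + [0] * n_acquitted)         # 1 = guilty, 0 = acquitted

clf = LogisticRegression().fit(X, y)

# New cases whose features resemble the (rare) acquittal pattern:
new_cases = rng.normal(loc=-0.4, size=(5, 3))
print(clf.predict(new_cases))   # almost certainly all 1s: the base has, in
                                # effect, never "seen" what an acquittal looks like
```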

Comparatively speaking, the theoretical challenge that judicial artificial intelligence has to face is more fundamental. The theoretical reasons lie mainly in the value-laden nature of the "same case" judgment itself. The "same case" in "same case, same judgment" refers to "the same kind of case", that is, cases belonging to the same type. There are no two identical cases in the world, and there is no inherent, substantive or original "same case". Whether cases are the "same case" or "different cases" depends on the perspective of judgment, and the legal judgment of "same case" depends on the perspective of the law. Therefore, whether two cases belong to the same type depends mainly on whether they have the relevant similarity, and the standard for judging relevant similarity is provided by the law itself. Law is not only a system of words but also a system of meanings, which attributes specific legal consequences to specific factual elements. The so-called "relevant similarity" refers not only to the existence of the same elements at the factual level of the two cases, but also to the fact that these same elements are evaluated as equivalent by the law in the sense of being "related to the legal consequences". Whether they can be evaluated as equivalent by the law depends on the legal purpose behind the legal text. The diversity of possible descriptions of the facts of a case is controlled by their relation to the purpose of the law, that is, the descriptions relevant to identifying the case are limited to those already contained in the existing law. The "same case" is a case that can be subsumed under the same rule of law, that is, a case that fulfills the same legal description. This requires that those who apply the law understand the purpose or meaning of the legal text. Only cases that have the same meaning with respect to the point that the law seeks to pursue or evaluate, that is, a homogeneity of meaning, are of the same type (the same case). Meaning is not an external physical characteristic of things, so type judgment is not a "material way of thinking" but a search for sameness of meaning; it rests not on an enumeration of single identical features across some or even all cases, but on a "holistic concern" constituted by a combination of features.

Artificial intelligence is precisely unable to carry out this holistic judgment of and concern for meaning. This is because an artificial intelligence system, while it may be able to achieve the best possible match to the data, cannot "understand" the meaning of the utterances it is processing. Cognitive computing technology cannot read text in the sense that a human being reads it; it has the technology to process it intelligently, to recognize those elements that are relevant to the question, and to draw the user's attention to them in an appropriate way. This is highlighted by the fact that it cannot avoid establishing "false correlations", that is, similarities in the factual features of two cases that have no legal significance or should not be associated with legal consequences, which the machine learning algorithm nevertheless takes as the premise for "linking" them to legal consequences. For example, if elements such as "black" and "female" were present in (a number of, or all) prior cases and are present in the pending case, an intelligent system is likely to recognize them as relevant features and to link the legal consequences identified in the prior cases to the pending case, adopting them as a prevailing rule (for example, heavier sentences for black women in fraud cases). In fact, however, neither the relevant legal norms nor the judges who handled the prior cases intended these elements to be treated as factual features relevant to the outcome of the judgment. This is where the problem of "algorithmic discrimination" arises. To be precise, we cannot say that "algorithmic discrimination" is real "discrimination", because when an intelligent system makes such a link it does not intend to do so; it does not have the capacity for perception and understanding in the sense of human free will. What it does is simply to match data to data.
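The "false correlation" problem can also be sketched: if a legally irrelevant attribute happens to co-vary with harsher outcomes in the prior-case data, a learned model assigns it predictive weight just as it does to a legally relevant fact. All variables and figures below are fabricated for illustration.

```python
# A fabricated illustration of a "false correlation": the model weights a
# legally irrelevant attribute simply because it co-occurred with harsher
# outcomes in the historical data it was trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000

relevant = rng.normal(size=n)              # a legally relevant fact (e.g. scale of loss)
irrelevant = rng.integers(0, 2, size=n)    # a legally irrelevant attribute (0/1)

# Fabricated history in which the irrelevant attribute happened to be
# associated with harsher outcomes, whatever the cause.
harsh = (relevant + 1.5 * irrelevant + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([relevant, irrelevant])
clf = LogisticRegression().fit(X, harsh)

print("learned weights [relevant, irrelevant]:", clf.coef_[0])
# Both weights come out clearly nonzero: the model treats the irrelevant
# attribute as a predictive feature, which is a statistical association,
# not a legal evaluation.
```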

Thus, the "same case" itself is the result of a value judgment (legal similarity). This judgment can only be made by a human judge and cannot be left to a machine that does not have the ability to make value judgments. Of course, this judgment may not be completely objective, nor reach consensus on every occasion, because different judges will quite often disagree about the meaning and purpose of the same legal text. But even if there is room for personal value judgment, so that different judges reach different judgments on the same two cases, this does not affect the principle of "like cases be treated alike" itself. For the judge who holds that the cases are alike will usually advocate the same judgment, and the judge who denies that they are alike will advocate a different judgment; their disagreement lies only in what the "requirements of the law" actually are. It can even be said that retaining a certain margin of difference is the premise of judicial innovation. However, the historical average judgment derived from big data mining will unconsciously be equated with the "optimal judgment", which objectively and potentially creates pressure for judges to move closer to it. That is to say, it implies the deduction: (1) previous cases were decided this way; (2) therefore, this kind of decision is the best; (3) therefore, the judge of the pending case should also decide this way. Clearly, the fallacy of deriving "ought" from "is" is committed here. Excessive convergence on averages will fundamentally limit the "creative evolution" of adjudication based on value changes or conceptual adjustments, eliminating the space for judicial innovation.


3.2 Does "like cases be treated alike" mean judicial justice?

Even if judicial artificial intelligence can realize "like cases be treated alike", does "like cases be treated alike" mean justice? Not necessarily. "Like cases be treated alike" is a derivative obligation of adjudication according to the law: if we understand the "law" in adjudication according to the law as general rules, and "general" means "the same treatment for the same situation", then "adjudication in accordance with general rules" contains the requirement of "like cases be treated alike". This is justice in the administration of the law, not the justice of the law itself. At the same time, "those who are equal are to be treated equally and those who are unequal unequally", or "the same treatment for the same situation and different treatment for different situations", also implies equality, that is, formal justice. So, as Hart puts it: "one essential element of the concept of justice is the principle of treating like cases alike." Thus, adjudicating according to law (which encompasses "like cases be treated alike" as its derivative duty) is a manifestation of formal justice, which is the minimum requirement of justice (in the sense of substantive justice). And the reason why "like cases be treated alike" is put forward in addition to judgment according to the law is that "like cases be treated alike" has a symbolic value that "overflows" beyond judgment according to the law: it is a manifestation of the visibility and predictability of formal justice. In other words, it is a symbol of the value of justice.

However, "symbolic value" or a "symbol of value" is not the same as the value itself. Apart from social effects such as "visibility" and "manifestation", "like cases be treated alike" does not have a unique value status within judicial justice; it is still only a component of formal justice. Therefore, "like cases be treated alike" does not represent the whole of judicial justice. This involves two situations.

In one situation, "like cases be treated alike" may conflict with the requirement of "judgment according to the law". "Like cases be treated alike" is a derivative duty of judgment according to the law, but it is not the same thing. There are two possible reasons for such a separation. One is that the past judgment was wrong, that is, it was not decided in accordance with the rules of law in force at the time. In this case, the judgment in the first case is the result of an unlawful decision, but it has the force of res judicata, and "like cases be treated alike" then conflicts with judgment according to the law. Undoubtedly, at this point the requirement to adjudicate on the basis of the precedent should be abandoned and the pending case should simply be decided differently, according to the law. "Like cases be treated alike" is a requirement of formal justice, and so is judging according to the law (there is thus no inquiry here into whether the content of the law itself is reasonable and justified). When there is tension between these two requirements of formal justice, judgment according to the law is preferable to "like cases be treated alike". This is because it is judgment according to the law, not "like cases be treated alike", that constitutes a constitutive duty of justice. As a constitutive duty, judgment according to the law is indispensable for judicial adjudication and is a necessary condition of it. Once judgment according to the law is abandoned, judicial adjudication ceases to be an activity of "doing justice". Judging according to law is the common and general nature of judicial adjudication, while "like cases be treated alike" is only a partial or figurative presentation of it. The other reason is that the rule of law on which the precedent was based has been abolished, or, even if not abolished, conflicts with later-enacted rules of law of the same or a higher rank. Here the past judgment was indeed made in accordance with the law in force at the time, did not violate the requirement of "judging according to the law", and is right in that sense. But owing to subsequent changes in the law, the precedent has either directly lost the basis of its judgment, or indirectly lost it according to the criteria that "the new law prevails over the old" and "the superior law prevails over the inferior". The pending case must then not be decided in accordance with the precedent, but must be decided on the new legal basis, so as to realize the requirement of judging according to law.

In another situation, "like cases be treated alike" may conflict with the requirement of "justice on a case-by-case basis". Here "like cases be treated alike" does not contradict the requirement of judging according to law, but it can produce unjust, even grossly unjust, consequences in pending cases. This is because the essence of a rule lies in a "solid generalization". It is the product of abstraction from typical situations, and it can only make demands of a person or thing by a common standard. In the process of abstraction, it inevitably ignores or omits many individualized details. And whether cases count as legal "like cases" is determined according to the general criteria of the rules (constituent elements), not according to specific details. So it may happen that although the pending case and the prior case fully meet the legal criteria of "like cases", that is, they are subject to the same legal rules, the pending case has additional detailed features, and precisely these detailed features require that it be treated in a special way. If the requirement of "like cases be treated alike" were imposed regardless, substantive justice in some cases would be sacrificed to technical manipulation. However, since the ultimate purpose of "like cases be treated alike" is to establish the value of judicial justice in society, the concept of social justice must be taken into account.

When a conflict between the requirement of "like cases be treated alike" and the requirement of justice in individual cases is unavoidable, judges face a choice: apply the rule of law directly without regard to the consequences in the individual case, or apply principles in pursuit of justice in the individual case and thereby create an exception to the rule. This involves a trade-off. It shows that, although important, "like cases be treated alike" is not a final judicial duty of judges; it can be overridden. Of course, judging according to the law and "like cases be treated alike" still have an initial priority over case-by-case justice, which is determined by the role of the judge in the legal system. But it cannot be denied that at some points judges have stronger reasons to create exceptions to the rule and to realize case-by-case justice. Of course, when to deviate from the rules in order to achieve case-by-case justice cannot be predetermined at the level of legal philosophy or general legal doctrine. This is because, as mentioned before, unlike generalized rules, the specific requirements of case-by-case justice, which incorporates the perspective of social justice, vary from case to case and require full individualized consideration and substantive argumentation by the judge. This is precisely the shortcoming of machine algorithms. Judicial adjudication is not a mechanical activity; it is a virtuous cause, responsible for the attractiveness of the rule of law. And since the law does not always lead to just results in individual cases, judges are not obliged to act in accordance with the rules in all cases. We must not cling only to the possible bias of human differences while ignoring the rigidity and coldness behind a uniform code.

To sum up: first, judicial artificial intelligence helps to realize judicial unity, but judicial unity does not necessarily mean "like cases be treated alike", and so it may not be able to realize formal justice in the sense of "like cases be treated alike"; second, even if judicial artificial intelligence were able to realize "like cases be treated alike", since "like cases be treated alike" is only one facet of judging according to the law, it may run contrary to judging according to the law and thus fail to realize formal justice in the sense of the higher requirement of judging according to the law; and it may also contradict the justice of individual cases, thus failing to realize substantive justice. Therefore, judicial artificial intelligence may not only fail to realize substantive justice, but may even deviate from it.


4. Personality prediction: strategism versus judicial justice


Personality prediction is prediction based on the historical data of the precedents of an individual court or judge. Such prediction is based not on the historical data of similar precedents in general, but on the individualized historical trajectory of judgments by the court or judge who decided similar cases. Unlike the derivation involved in same-case prediction, personality prediction does not move from a universal proposition ("all prior cases of this kind were decided this way") to a singular proposition ("the pending case should also be decided this way"), but from a singular proposition ("Court A/Judge a decided prior cases of this kind this way") to another singular proposition ("(having chosen Court A/Judge a,) Court A/Judge a will also decide the pending case this way"). Setting aside factors such as unpredictable surprises, this derivation is not in itself a logical fallacy, but it encourages outright strategic behavior that risks seriously violating judicial ethics. For it follows, on the one hand, a realist logic and, on the other, a logic of commercial preference, both of which may conflict with judicial justice.


4.1 The realist logic of “judge profiles”

Judicial artificial intelligence in the big data era not only calculates on the basis of historical data, but also "predicts" the future behavior of judges, because its core element lies in the construction of an algorithmic model that can predict judgments. In essence, this machine algorithm is based on the historical data of a particular judge's judgments and evaluates, analyzes, compares or predicts on the basis of the judge's identity. The personality prediction of a judge is built on two kinds of analysis. One is consistency analysis, that is, comparing the case data of a specific judge with the big data of similar cases handled by other judges, in order to analyze how consistent the specific judge's handling of specific cases is with the judicial system as a whole. The other is continuity analysis, which analyzes whether a judge's standard of judgment is continuous by comparing the judge's specific cases with his or her own history of similar cases. The basic premise of personality prediction is therefore that judges' future adjudicative behavior will be consistent with their past adjudicative behavior.
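The two analyses can be sketched as simple aggregations over a table of past decisions. The judges, case types and award figures below are invented; a real profiling product would of course be far more elaborate.

```python
# A purely illustrative sketch of "consistency" and "continuity" analysis over
# an invented table of past decisions (judges, case types and damages awards
# are all made up).
import pandas as pd

decisions = pd.DataFrame({
    "judge":     ["A", "A", "A", "B", "B", "B", "A", "B"],
    "case_type": ["loan"] * 8,
    "year":      [2020, 2021, 2022, 2020, 2021, 2022, 2023, 2023],
    "damages":   [10.0, 11.0, 12.0, 20.0, 21.0, 19.0, 18.0, 20.5],
})

loan_cases = decisions[decisions["case_type"] == "loan"]

# Consistency analysis: how far does judge A sit from the system-wide pattern
# for this case type?
system_mean = loan_cases["damages"].mean()
judge_a_mean = loan_cases.loc[loan_cases["judge"] == "A", "damages"].mean()
print("judge A deviation from system average:", judge_a_mean - system_mean)

# Continuity analysis: does judge A's latest decision follow A's own history?
hist = loan_cases[(loan_cases["judge"] == "A") & (loan_cases["year"] < 2023)]
latest = loan_cases[(loan_cases["judge"] == "A") & (loan_cases["year"] == 2023)]
print("judge A historical mean:", hist["damages"].mean())
print("judge A latest award:   ", latest["damages"].iloc[0])
```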

If a person's character and tendencies can be shown through his or her (regular) behavior, then judicial artificial intelligence portrays the character and tendencies of judges through their (regular) adjudicative behavior, that is to say, it "paints a picture" of the judge. This "profile" is used as the basis for building an algorithmic model of the individual judge in order to predict his or her behavior in similar cases in the future. Even the judges themselves may not be aware of their own "profile", because algorithms in the age of artificial intelligence may know the actors better than the actors know themselves. Especially where judgment documents are fully published online, the personality prediction supported by judicial big data will be greatly enhanced. Personality prediction centered on judges' personal data goes much further than same-case prediction, because it completely sets aside the rules and the factual characteristics of the case (around which the determination of "like cases" revolves) and shifts to the individual judge who decided the prior case. For this kind of prediction, the rules of law and the typical features of the case are not primary; the person who makes the decision is primary. And people are individualized: if the decision-maker is different, the conclusion of the decision may be different, even in a similar case. This shifts the focus of judicial artificial intelligence from the regularities of cases to the regularities of the person (the regular trajectory of the individual). This is an entirely realist logic that denies adjudication as the practice of rules, echoing Holmes's famous observation that the law is precisely a prediction of the actual moves that will be made by the courts. The only difference is that "courts" is broadened to include "judges".

This strategic and opportunistic attitude is not concerned with the impartiality of justice, but with the ability to use predictions of judges' decisions for self-interest; it is concerned not with reasons and arguments but with preferred outcomes. Personality prediction fundamentally challenges the notion of judicial justice: judgments should be both a form of "visible justice" and a form of "articulated justice". The former is procedural justice, the latter substantive justice. Procedural justice will be discussed in the next part; here the concern is mainly with substantive justice. It should be seen that justice in the substantive sense is not only a result in the sense of formal or substantive justice; it is also necessarily related to the nature of the judgment. A judgment is by nature a reasoned resolution of a dispute. It must tell the parties and the public not only what judgment the court has given with respect to a particular dispute, but also why that judgment has been given. Even if discretion is unavoidable, the judge must give reasons for the judgment. And to give reasons is to engage in reasoning or argumentation. Legal reasoning in judgments cites normative and factual reasons to support the specific judgment that is ultimately reached. A judgment is the vehicle of reasoning or deduction. An effective judgment must be based on sufficient legal grounds and factual reasons, and must show, in a logical and sensible way, the process of deducing the conclusion from the law and the facts. In short, justice is justice based on reasons and arguments. This kind of justice presupposes the participant's perspective, because the question of whether a decision is just or unjust is relevant only to participants in the activity of judicial argumentation, or to those who care about what the right decision is under the legal system. On the contrary, personality prediction presupposes the position of the observer, who cares only about what the judge has done or will do, not about the correctness of the decision, or whether it is just or unjust. Therefore, the realist logic based on "judge profiles" and judicial justice are like horses on two different tracks, running counter to each other.


4.2 The commercial preference logic of "buying and selling judges"

If the challenge of personality prediction to substantive justice is only a possible, but not an inevitable, challenge (because the judgment rendered by the predicted judge may be substantively just), and therefore a relative one, its challenge to procedural justice is necessary and absolute. In reality, it is not courts and legal researchers who have the greatest incentive to conduct "judge profiling", but the technology companies that provide legal services for a fee. For example, in 2016 France published the Law for a Digital Republic, which requires that all court decisions be made available to the public free of charge, on the basis of respect for the privacy of the persons concerned and an assessment of the risk of re-identification. Once the law was enacted, big data analytics on judgments grew rapidly in France. A number of French technology companies use big data and artificial intelligence to "profile", count and rank judges in order to predict the likelihood of success in litigation and the amount of damages that may be awarded for infringements of rights, and even to help parties choose more "generous" judges in maintenance disputes. The logic here is purely a logic of commercial preference: whatever kind of judge the customer likes, the customer can pay for that "service". In essence, it is no different from an algorithmic recommendation system for purchasing goods or other services. This commercial logic will seriously erode the concept of procedural justice.

On the one hand, the principles of procedural autonomy and equal treatment of the parties, which form the core of procedural justice, will be infringed. As far as equal treatment of the parties is concerned, personality prediction may trigger two unjust situations. (1) The application of big data profiles of judges may give rise to acts of jurisdiction peddling. In judicial practice, some judges want to hear more cases for various motives, including reputation or local interests. When plaintiffs have a wide choice of courts, these judges have an incentive to make the law more favorable to plaintiffs, thereby attracting more plaintiffs to sue. At this point, the judge is equivalent to a seller in a buyer's market, and the judgment pursues not legal justice and social justice, not a reasonable distribution of rights and obligations, but the preferences of the advantaged buyer (the plaintiff). Thus, the application of judicial big data analytics makes it possible for judges to consciously tilt towards the public's preferences in order to build a favorable record and thereby secure more cases where jurisdiction is contested, creating an undue inducement for judges in their decision-making. This is a new form of judicial corruption, because judges profit from big data profiles. (2) The use of big data profiles of judges may exacerbate strategic jurisdictional selection, that is, the litigation speculation of "forum shopping" and "judge shopping". "Forum shopping" refers to parties consciously choosing to litigate in a specific court in order to obtain a judgment in their favor. With the popularization of big data technology and the wide application of big data profiles of judges, the cost and difficulty of "forum shopping" and "judge shopping" will drop significantly. A small-scale "litigation strategy" could turn into generalized "litigation speculation", thus affecting judicial justice. In commercial logic, those who are willing to pay a higher price get better service ("the highest bidder wins"). Therefore, parties who can afford to buy "judge profiles" from a commercial company, or who are willing to pay a higher price for "judge profiles" favorable to them, will have a far greater chance of winning than those who cannot buy them, or whose purchasing power is relatively weak. Both "jurisdiction peddling" and "judge shopping" result in parties being denied equal access to justice, albeit in different forms: "jurisdiction peddling" is "selling judges", done intentionally by the judge or with the complicity of the judge and the parties, and is therefore judicial corruption; whereas "judge shopping" is "buying judges", done with the complicity of commercial companies and the parties, and does not require the judge even to be aware that such "judge selection" is taking place. In both cases, however, the underlying logic is the logic of commercial preference, which treats judicial adjudication as a buying and selling activity.

As far as procedural autonomy is concerned, the application of big data profiles of judges undoubtedly interferes with the independence of the judiciary and the autonomous unfolding of the process of adjudication. The very success of personality prediction means that the conclusion is no longer the product of the proceedings as they unfold, no longer the result of argumentation and reasoning, but is determined from the outset. Justice has been "manipulated": by the parties who can afford to pay, by the commercial companies that want to profit from it, by the judges themselves who want to obtain more resources, or, more precisely, by the omnipresent commercial logic of modern society. As the scope of judicial penetration by artificial intelligence technology expands, the entire process of justice may be brought under the lens of technological governance, with no limits and no place to hide. Accordingly, judicial independence and a degree of autonomy will become more and more of a myth.

On the other hand, the principle that judges should be appointed by law would also be infringed. The origins of this principle can be traced back to the French Constitution of 1791. Chapter V, Article 4 of that Constitution states: "No citizen shall be exempted from being tried by a judge appointed by law by any ad hoc tribunal, or by means of competence and referral other than those provided for by law." The principle was subsequently adopted in Germany, where Article 105 of the Weimar Constitution and Article 101 of the German Basic Law stipulate that no special courts may be established and that no one may be deprived of the right to be tried by a statutory judge. The term "statutory judge" refers to the judge to whom a legal dispute is assigned in accordance with the statutory jurisdictional provisions, as well as in accordance with a general plan for the allocation of business, usually made in advance within the court having jurisdiction over the matter. The statutory judge principle ensures the independence and neutrality of the judge. It is generally recognized that the statutory judge principle contains four elements, namely, the prohibition of special courts, the statutory jurisdiction of the court, the statutory assignment of cases, and the automatic invalidity of any assignment of cases in violation of the statutory procedure. Among them, the statutory jurisdiction of the court is the core of the statutory judge principle: the specific judge to whom a case is to be assigned must be specified in advance by general and abstract law. The question of which cases are to be heard by which judges is a matter of case allocation within the courts. In developed countries governed by the rule of law, the allocation of cases within the courts is basically determined by a predetermined and transparent procedure. As for the allocation of specific litigation cases, most courts, in principle, adopt the principles of "random allocation" and "equal allocation". The function of the principle that judges should be appointed by law is, on the one hand, to safeguard the fundamental rights of the parties, especially the right to a fair trial; on the other hand, it is to ensure the independence of judges and to prevent external forces from interfering in the administration of justice. This is in fact the requirement of procedural autonomy and equal treatment of the parties. Therefore, the statutory judge principle guarantees procedural justice from the perspective of the case allocation mechanism.
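By contrast, the kind of case allocation contemplated by the statutory judge principle can be stated almost mechanically. The following minimal sketch, again hypothetical and in Python, illustrates "random allocation" combined with "equal allocation": the rule is fixed before any dispute arrives and takes no account of who the parties are or what outcome they prefer.

    import random

    def allocate_case(judges, caseload):
        """Pick a judge at random from those currently carrying the lightest caseload."""
        lightest = min(caseload[j] for j in judges)
        eligible = [j for j in judges if caseload[j] == lightest]
        chosen = random.choice(eligible)
        caseload[chosen] += 1
        return chosen

    judges = ["Judge A", "Judge B", "Judge C"]
    caseload = {j: 0 for j in judges}
    for case_id in ["2022-001", "2022-002", "2022-003", "2022-004"]:
        print(case_id, "->", allocate_case(judges, caseload))

Whatever the procedural details, the decisive feature is that nothing in the rule responds to the preferences of a paying party, which is precisely what "judge shopping" seeks to undo.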

It is clear, however, that the practice of "buying and selling judges" will inevitably erode the principle that judges should be appointed by law, infringe upon the right to a fair trial of the weaker party, and affect the independence of the judiciary. According to the principle that judges should be appointed by law, the rules determining who will adjudicate (the rules of the case allocation process) should precede the dispute, and the adjudicator of an individual case should be produced by rules determined by law in advance. However, the basic idea of both "jurisdiction peddling" and "judge shopping" is to "manipulate the outcome of the trial by manipulating who will try the case", or even to influence the outcome of the trial by changing who will try the case. Once a statutory trial judge is deprived of the right to adjudicate a particular case due to human interference, while another judge is given the right to adjudicate the case due to human factors, the independence of both will be affected by the intervention of external factors, and procedural fairness will not be ensured.

To summarize: firstly, the realist logic of "judge profiles" implies a strategic and opportunistic attitude that is not concerned with substantive justice and may therefore run counter to it; secondly, the business preference logic of "buying and selling judges" inevitably erodes the idea of procedural justice, that is, the principles of procedural autonomy and equal treatment of the parties, as well as the principle that judges should be appointed by law.


5. Conclusion


The basic operational logic of judicial artificial intelligence is prediction based on historical data. It is true that the law, and judicial decisions, should be predictable, but predictability means only that they should be based on general rules of law that have been publicized in advance. It does not mean that the decision made by a court or judge is actually predicted by a citizen or a member of the public, whether occasionally by themselves or regularly with the help of artificial intelligence or algorithmic systems. Thus, technology-based prediction of judicial decisions is not the same as the predictability of judicial decisions: the former relates to de facto predictive capacity, while the latter relates to de jure predictability. This also suggests that judicial technology can never completely replace judicial judgment, especially value judgment in adjudication, because judicial adjudication is an inherently value-laden activity, and in such an activity "human" logic cannot be replaced by "machine" logic. This is a requirement of the dignity of justice, and also of human dignity.

The appropriate positioning of judicial artificial intelligence at present is that of an auxiliary means of judicial adjudication, mainly aimed at improving the efficiency of judicial trials. As put forward in the Opinions of the Supreme People's Court on Accelerating the Construction of Smart Courts, the goal of the construction of smart courts is to explore the establishment of knowledge maps for filing, trial, adjudication, execution, and other court business, and to build an artificial intelligence perception and interaction system for all types of users and a knowledge-centered "artificial intelligence-assisted decision-making system". From the perspective of disciplinary attributes, judicial artificial intelligence belongs to legal informatics, or more precisely to decision-making legal informatics, mainly involving legal expert systems, decision-assistance software, and legal consulting software. Legal informatics is essentially the application of informatics in the field of law, and the judicial artificial intelligence involved in China's smart court, intelligent prosecution, and intelligent public security projects belongs to this category. Although most scholars today do not deny this decision-assisting status of judicial artificial intelligence, the author does have a concern: just as technology is never merely technology but is always also loaded with a specific value pursuit or ideology, the excessive promotion of and obsession with judicial artificial intelligence may imperceptibly produce an orientation in which technocratic logic overrides human logic. For this reason, it is important to always keep in mind that technology will always be just technology, and the application of judicial artificial intelligence can never replace the pursuit of justice. "Justice in robes" is ultimately human justice, not machine justice.


This article was originally published in Journal of Political Science and Law, No. 4, 2022, and is reposted from the WeChat official account of Journal of Political Science and Law.