Challenges and Reshaping of AI-assisted Sentencing from the Perspective of Due Process
Hong Tao
Ph.D. Candidate at Guanghua Law School, Zhejiang University
Abstract: AI-assisted sentencing is a key project in the construction of intelligent justice, but examined from the perspective of due process, its application poses multiple challenges: automated decision-making hollows out the principle of procedural participation, discriminatory decisions violate the principle of procedural neutrality, and so on. These challenges do not stem merely from the uncertainty of artificial intelligence; the deeper reason is that the traditional theory of procedural justice has not brought digital space into the litigation arena, has neglected the regulation of technology developers, and has confined procedural openness to perceptibility without advancing it to interpretability. Within the category of procedural justice, technical procedural justice and traditional procedural justice are both grounded in the theory of human dignity and in judicial credibility, and the former can compensate for the latter’s temporal limitations in its fields of application, regulatory objects, and constituent elements; it should therefore be introduced as a supplement. In the judicial sentencing context, the theory comprises a first layer, the “human-centered” principle, and a second layer, the compliance obligations of sentencing systems and the procedural rights of stakeholders. Under this guidance, a procedural choice system should be added, the algorithm disclosure system clarified, the expert testimony and technical assistant systems improved, and an intelligent accountability system established. Furthermore, the collaborative development mechanism, algorithm filing mechanism, algorithmic hearing mechanism, and judicial training mechanism should be expanded in linkage with these institutions.
Key words: due process, artificial intelligence (AI), assisted sentencing, technical procedural justice
With the widespread application of digital technologies such as big data and artificial intelligence, the field of criminal justice has undergone a digital transformation. Since sentencing activities directly involve the restriction of the defendant’s personal rights and liberty, and even the deprivation of the right to life, AI-assisted sentencing has become a key project. Faced with the risks and challenges of artificial intelligence’s intervention in judicial sentencing, the academic community has engaged in discussions. From the perspective of the research trajectory, there are roughly three stages: The first stage concerns the feasibility and legitimacy of applying artificial intelligence to judicial sentencing. Some scholars have pointed out that judicial decisions emphasize the consistency of “emotion, reason, and law,” yet artificial intelligence cannot simulate non-logical human thinking, making it difficult to provide reasoning for judgments. Others believe that by learning from judges’ collective sentencing experiences in existing cases, artificial intelligence can achieve scientific sentencing predictions. With the practical feedback from intelligent sentencing and the deepening of research, the view that artificial intelligence can be used for sentencing has become a consensus, and the first stage of research came to an end. The second stage focuses on the impact of artificial intelligence on the status of judges as the main body of adjudication. Some scholars argue that the characteristics of data precedence and algorithmic dependence can easily form a “data-ism judicial perspective,” leading to judges becoming calculable, predictable, and controllable objects. However, most scholars believe that current artificial intelligence is weak AI, which is incapable of handling judicial work that requires extensive knowledge and high technical content, and can only play an auxiliary role. Although the conclusion that artificial intelligence will not replace judges has become the mainstream academic view, the practical alienation caused by technological dependence—arising from the knowledge gap and the algorithmic black box—means that the research of the second stage is still ongoing. The third stage addresses the challenges and governance of artificial intelligence in relation to due process. Some scholars have realized that artificial intelligence not only impacts judges but also challenges due process itself, threatening the legitimate rights and interests of all participants in the procedure (especially the defendants). At present, some research, starting from the field of criminal justice, points out that artificial intelligence shocks due process and advocates the construction of a theory of procedural justice for the digital era. Although such work is theoretically enlightening, it fails to provide targeted theoretical supply for AI-assisted sentencing because it is not grounded in the sentencing context. Some research is based on the sentencing scenario but adopts an integrated perspective that simultaneously addresses substantive and procedural issues, which results in insufficient depth in the discussion of procedural aspects. The latest research, based on an analysis of the risks of AI-assisted sentencing, has attempted to transform the decision-making procedure into a litigation-oriented one by introducing and updating the theory of technical procedural justice. 
Unfortunately, on the one hand, it does not distinguish between the due process challenges of AI-assisted sentencing and the theoretical causes of these challenges, instead classifying both under “risks,” which makes the analysis of challenges insufficiently systematic and the causes of problems insufficiently prominent. On the other hand, the proposed governance measures mainly focus on the trial stage, with relatively thin content, and fail to recognize that AI-assisted sentencing runs through the entire litigation process, lacking the mechanism design of “spillover effects.” In light of this, the author will first systematically analyze the due process challenges of AI-assisted sentencing and further point out the causal relationship between these challenges and traditional theories of procedural justice. Second, while demonstrating the “supplementary” introduction of technical procedural justice theory, the author will provide a contextualized interpretation to offer targeted theoretical supply. Finally, under the guidance of this new theory, the author will reshape the system and expand the mechanisms of AI-assisted sentencing, with the hope of contributing to the reform of sentencing standardization and the realization of digital justice.
1. Multiple Challenges of AI-assisted Sentencing to Due Process
Regarding the challenges posed by AI-assisted sentencing to due process, it is first necessary to clarify the two concepts of “AI-assisted sentencing” and “due process.” Generally speaking, AI-assisted sentencing refers to the use of big data, cloud computing, and graph structures to construct a legal knowledge graph; on this basis, it employs natural-language semantic technologies for case similarity recognition and fact extraction, and is trained through learning algorithms that incorporate human feedback, ultimately achieving sentencing prediction and deviation testing. Due process refers to procedures that possess a certain degree of reasonableness and legitimacy. According to evaluative standards, it can be divided into procedural instrumentalism and procedural intrinsicism: the former holds that the value goal of due process lies primarily in the realization of substantive justice, while the latter argues that the evaluative standard of due process depends on whether the procedure itself possesses certain intrinsic qualities. With the advancement of the rule of law, respect for the parties’ subject status has become the core consideration, and evaluative standards have increasingly inclined toward independent intrinsic qualities. Therefore, the challenges that AI-assisted sentencing poses to due process should be assessed from the perspective of compliance with these intrinsic procedural qualities.
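To make the pipeline just described more concrete, the following is a minimal, purely illustrative sketch of the “similar-case retrieval, sentencing prediction, deviation testing” loop. The case data, the Jaccard similarity over sentencing circumstances (standing in for knowledge-graph-based case similarity recognition), and the tolerance threshold are assumptions made for illustration only, not a description of any deployed system.

```python
# Illustrative sketch only: a toy "similar-case retrieval + sentencing prediction +
# deviation testing" loop. All case data, features, and thresholds are hypothetical
# and do not describe any actual sentencing system.
from dataclasses import dataclass

@dataclass
class PrecedentCase:
    facts: str              # extracted fact description
    circumstances: set      # sentencing circumstances, e.g. {"confession", "recidivist"}
    sentence_months: int    # sentence actually imposed

def similarity(a: set, b: set) -> float:
    """Jaccard overlap of circumstances, standing in for knowledge-graph case matching."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def predict_sentence(query: set, precedents: list, k: int = 3) -> float:
    """Similarity-weighted average sentence of the k most similar precedents."""
    ranked = sorted(precedents, key=lambda c: similarity(query, c.circumstances), reverse=True)[:k]
    weights = [similarity(query, c.circumstances) for c in ranked]
    total = sum(weights) or 1.0
    return sum(w * c.sentence_months for w, c in zip(weights, ranked)) / total

def deviates(proposed: float, predicted: float, tolerance: float = 0.2) -> bool:
    """Deviation test: flag proposals outside the tolerance band around the prediction."""
    return abs(proposed - predicted) > tolerance * predicted

precedents = [
    PrecedentCase("shop theft", {"theft", "confession"}, 8),
    PrecedentCase("repeat theft", {"theft", "recidivist"}, 14),
    PrecedentCase("theft, restitution", {"theft", "confession", "restitution"}, 6),
]
predicted = predict_sentence({"theft", "confession"}, precedents)
print(f"predicted: {predicted:.1f} months; 12 months deviates: {deviates(12, predicted)}")
```

Even in this toy form, the prediction is wholly determined by which precedents were selected and how similarity and weights were defined, which is precisely why the later sections insist on disclosure and review of these choices.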
1.1 Automated Decision-making Undermines the Principle of Procedural Participation
The principle of procedural participation is the primary element of procedural fairness. This principle holds that subjects who may be directly affected by criminal adjudication or the final outcome of litigation (hereinafter referred to as “stakeholders”) should have full and meaningful opportunities to participate in the process of decision-making and exert effective influence on the outcome. In understanding this principle, one may refer to Habermas’s “theory of communicative action,” namely, that communicative rationality between subjects depends on verbal and behavioral interactions, whereby communication and consensus are reached on the basis of mutual recognition of each other’s subject status. In short, the stakeholders’ constant “presence” and meaningful participation are inseparable from face-to-face dialogue and debate with the opposing party and adjudicators, allowing them to present opinions and express arguments instantly through language and sensory engagement. However, these designs lose their intended value in the face of AI-assisted sentencing and fail to guarantee the dignity of stakeholders. Specifically, although stakeholders may be “present” in the physical space during AI-assisted sentencing, the decision-making process of intelligent sentencing takes place in digital space and is completed instantaneously, in a space heterogeneous to the physical one in which they are present. Restricted by informational barriers, stakeholders are unable to participate in the decision-making process and are, in effect, “absent.” Moreover, the case samples, algorithmic models, and weight evaluations on which AI-assisted sentencing is based are often shrouded in the “black box” and are highly specialized, leaving stakeholders unable to conduct inquiries due to the knowledge gap, let alone exert effective influence on the judgment result. In sum, under AI-assisted sentencing, the procedural participation value of verbal and sensory communicative actions between stakeholders, opposing parties, and adjudicators in physical space is severely diminished, making it difficult to influence the judgment outcome through “process control.” Even more concerning is that the widespread application of technologically rational artificial intelligence may unconsciously lead people to revere and accept its conclusions, thereby reducing themselves to mere bystanders or appendages of intelligent justice.
1.2 Discriminatory Decisions Violate the Principle of Procedural Neutrality
The principle of procedural neutrality is the cornerstone of due process. It requires that adjudicators maintain a detached and impartial position between the parties to litigation, ensuring that each side receives equal treatment and respect in litigation activities. The mainstream academic view holds that artificial intelligence plays only an auxiliary role in judicial sentencing and does not fundamentally challenge values such as judicial fairness and neutrality. This assertion mainly derives from theoretical analysis and logical reasoning. In practice, however, AI-assisted sentencing challenges the principle of procedural neutrality in at least two respects. First, the direct adoption of intelligent sentencing recommendations that embody a “prosecution tendency” results in the fusion of prosecutorial and adjudicative functions. Against the background of digital prosecution, procuratorial organs increasingly rely on internal intelligent systems. Yet such systems are trained on past prosecutorial cases, and prosecutorial stances are unconsciously embedded during algorithmic coding, which makes intelligent sentencing recommendations “prosecution-oriented.” In practice, even when judges doubt the reasonableness of these recommendations, they usually choose to trust them due to a lack of professional knowledge and the strong sense of logical rigor that comes with technical analysis. The direct adoption of prosecution-oriented recommendations renders the separation of prosecution and adjudication an empty slogan. Second, sentencing systems within the construction of “smart courts” inevitably produce discrimination due to data and algorithms, which, if not promptly detected by judges, undermines procedural neutrality. On the one hand, judicial cases used for training sentencing systems are prone to inherent sample biases, and the expansive nature of big data under the principle of “garbage in, garbage out” easily results in discriminatory outcomes. On the other hand, when technical personnel select algorithmic variables, social culture, value preferences, and other factors unavoidably introduce “pre-existing biases,” leading to outcome deviations. For example, the COMPAS system used in U.S. courts assessed the probability of reoffending on the basis of variables such as gender together with factors closely correlated with race; as a result, African-American defendants were assessed with significantly higher recidivism risks than white defendants and often received harsher sentences.
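How “pre-existing bias” and “garbage in, garbage out” translate into discriminatory outputs can be shown with a deliberately simple, fabricated example: a risk estimator that never reads the sensitive attribute still reproduces historical disparities through a correlated proxy feature. The data, feature names, and label rates below are invented purely for illustration.

```python
# Fabricated illustration of "garbage in, garbage out": a risk estimator trained on
# historically biased labels reproduces the bias even though it never reads the
# sensitive attribute, because a proxy feature ("neighborhood") is correlated with it.
from collections import defaultdict

# (neighborhood, group, labelled_high_risk): in the historical samples, group B
# defendants were labelled high-risk far more often in otherwise similar cases.
training = (
    [("north", "A", 0)] * 40 + [("north", "A", 1)] * 10
    + [("south", "B", 0)] * 15 + [("south", "B", 1)] * 35
)

# "Training" drops the sensitive attribute and only estimates P(high risk | neighborhood).
totals, highs = defaultdict(int), defaultdict(int)
for neighborhood, _group, label in training:
    totals[neighborhood] += 1
    highs[neighborhood] += label
risk_rate = {n: highs[n] / totals[n] for n in totals}

# Two defendants with identical offence facts but different neighborhoods receive
# very different scores, so the historical disparity survives "attribute-blind" prediction.
print(risk_rate)   # {'north': 0.2, 'south': 0.7}
```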
1.3 Black-box Operations Undermine the Principle of Procedural Rationality
The principle of procedural rationality is an indispensable element of due process. It requires that the procedures through which adjudicators render judgments must adhere to a certain rationality, ensuring that their reasoning and conclusions are based on reliable and clear understandings. As discussed above, AI-assisted sentencing relies on “judicial big data” and “sentencing algorithmic decision-making,” operating under the paradigm of “centralized data processing—intelligent opinion generation.” The construction of legal knowledge graphs, case similarity recognition, and fact extraction constitute centralized data processing; model training mines correlations between sentencing data and sentencing patterns, thereby generating opinions used for sentencing prediction and deviation testing. In practice, however, these processes raise many concerns due to the black-box effect, which to some extent violates the principle of procedural rationality. First, the sources of training data for sentencing systems lack transparency, including but not limited to geographic coverage, case types, and time span. Moreover, the construction pathways of legal knowledge graphs are not made clear, and case recognition and fact extraction cannot guarantee completeness, casting doubt on the rationality of the centralized data processing stage and contradicting the requirements of procedural rationality. Second, sentencing systems lack algorithmic transparency and interpretability. Not only adjudicators and litigants but even technical developers themselves may have limited understanding of their principles. In the U.S. case of State v. Loomis, the defendant appealed to the Wisconsin Supreme Court, arguing that the court’s overreliance on the COMPAS system violated his due process rights. The relevant algorithms and data were withheld as trade secrets, preventing effective inquiry. Foreign researchers have pointed out that the algorithms of COMPAS and similar sentencing systems exist in a black-box state, with internal structures difficult to interpret. As a result, criminal defendants and their defense lawyers cannot raise challenges against sentencing outcomes, seriously eroding the concept of procedural justice.
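The transparency gaps noted above, concerning geographic coverage, case types, and time span, can be made concrete as the kind of training-data manifest a sentencing system could be required to publish before deployment. The field names and values below are assumptions offered for illustration, not a mandated or existing format.

```python
# Illustrative training-data manifest: the minimum provenance a sentencing system
# could disclose about its training corpus. All values are fabricated examples.
TRAINING_DATA_MANIFEST = {
    "time_span": {"from": "2016-01-01", "to": "2023-12-31"},
    "geographic_coverage": ["Province A", "Province B"],      # jurisdictions sampled
    "case_types": ["theft", "fraud", "intentional injury"],   # offence categories included
    "case_count": 120_000,
    "exclusions": ["retried cases", "cases under seal"],      # what was filtered out and why
    "knowledge_graph_version": "kg-2024.1",                   # graph used for fact extraction
    "last_audit": "2024-06-30",
}

def completeness_check(manifest: dict) -> list:
    """Return the provenance fields that are missing, as a simple pre-deployment gate."""
    required = ["time_span", "geographic_coverage", "case_types", "case_count", "last_audit"]
    return [field for field in required if field not in manifest]

print(completeness_check(TRAINING_DATA_MANIFEST))   # [] -> nothing missing
```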
1.4 Monopoly of Public Power Impacts the Principle of Procedural Equality
Due process is primarily oriented toward people in the abstract sense, assuming formal equality under the premise of “a level playing field,” while neglecting the disparities in litigation capacity caused by differences in wealth, status, and other factors. Therefore, the principle of procedural equality, with “substantive equality” at its core, was proposed. It advocates that the stronger party in litigation should bear certain special obligations, while the weaker party should be granted necessary privileges to achieve “equality of arms.” At present, the development of sentencing systems exhibits a monopoly of public power, as they are mainly developed or procured by judicial organs such as procuratorates and courts, with insufficient participation from social forces such as lawyer associations, law schools, and the general public. This monopoly exacerbates the imbalance between prosecution and defense, making the already elusive goal of “equality of arms” even harder to attain. First, defendants, due to deficiencies in information collection and data analysis capabilities, are excluded from the use of sentencing systems, further deepening their cognitive barriers regarding data sources and algorithmic principles, leaving them unable to effectively question or defend. In contrast, procuratorial organs, whether in self-development or technical procurement, can select cases aligned with their value orientations as training data and embed their preferences in model training, thereby producing outputs more favorable to their side. Second, the sentencing opinions obtained by prosecution and defense through artificial intelligence are not treated equally in litigation activities, let alone given the preferential protection due to the weaker party. Sentencing recommendations obtained by people’s procuratorates through AI not only have legal authority but are also often directly adopted by courts as final rulings. By contrast, most defendants lack access to official sentencing systems in judicial practice. Even in the rare instances where private systems such as “Xiaobaogong (小包公)” or “Fa Bao Zhi Xing (法宝智刑)” provide favorable results, courts seldom accept them. In conclusion, although artificial intelligence is a powerful tool in litigation, it has not improved the disadvantaged position of defendants. On the contrary, it further strengthens the advantage of the prosecution, which is detrimental to the realization of procedural justice.
2. Causal Analysis: The Temporal Limitations of Traditional Procedural Justice Theory
At present, existing academic research has only recognized the unilateral challenges posed by artificial intelligence to due process, without delving into the fundamental causes behind them. The proposed solutions are mostly limited to the cautious application of technology (such as restricting its use to simple cases and leaving the final decision to judges) and targeted procedural optimizations (such as disclosing sentencing algorithms and providing sentencing remedies). This approach, on the one hand, prevents the full release of AI’s potential, and on the other hand, does little to ensure the coordination among various measures, thereby making systemic governance difficult to achieve. In fact, in the digital era, the limitations of traditional procedural justice theory have gradually become apparent, and it can no longer provide sufficient explanatory power.
2.1 Failure to Incorporate Digital Space into the Litigation Arena
Scholars both at home and abroad generally believe that procedural justice theory originates from the Anglo-American legal system, with its roots traceable to natural justice. Over the course of its evolution, human productive and social activities mainly took place in physical space, and litigation was no exception. Accordingly, the litigation arena envisaged by traditional procedural justice theory was also oriented toward physical space. For instance, litigants had the right to remain “present” throughout the adjudication process, and could submit evidence or raise arguments before the adjudicator. With the development of digital technology, however, humanity has created a digital space independent of the physical world, thereby making digital society a reality. Litigation activities within digital society have also undergone transformation—yet such transformation did not begin with artificial intelligence, but had already emerged earlier, as exemplified by the legal recognition of electronic data. Faced with challenges brought by new digital technologies, traditional procedural justice theory has attempted targeted optimizations, but it has not realized that its orientation toward physical space is too narrow to regulate litigation activities occurring in digital space. Specifically, because traditional procedural justice theory fails to incorporate digital space into the litigation arena, stakeholders in physical space lack legitimate grounds to intervene in activities within digital space. For example, they cannot know what kinds of data have been selected or what algorithms have been trained in sentencing systems. From the operational status of the Shanghai High People’s Court’s “Criminal Case Intelligent Auxiliary System” and the Guangzhou Intermediate People’s Court’s “Intelligent Auxiliary Sentencing Decision System,” it can be seen that parties have not been explicitly granted the right to review judicial big data of similar cases. The underlying reason is the absence of rules granting litigants rights in digital space. In other words, even if sentencing systems operate as black boxes, or embody the preexisting biases of judges or the prosecutorial tendencies of the prosecution, the algorithms—by virtue of being situated in digital space—effectively obtain a form of regulatory “immunity.” If these problems are not properly addressed, the principles of procedural participation and procedural rationality will naturally lose their efficacy in the face of AI-assisted sentencing.
2.2 Neglect of Regulation over Technological Developers
To enhance the acceptability of judicial decisions, traditional procedural justice theory imposes corresponding regulations on the behavior of different litigation actors so that litigation activities conform to judicial fairness. For instance, judges adjudicating cases are required to remain neutral, and defendants are granted the right not to incriminate themselves. Traditional procedural justice theory centers on regulating the prosecution, the defense, and the judiciary, while paying relatively little attention to other litigation participants such as witnesses, expert witnesses, or translators. However, in the context of AI-assisted sentencing, technology developers—including software engineers and system deployers—though not within the traditional categories of criminal litigation actors, exert substantive influence over the litigation process and outcomes through activities such as data collection and analysis, algorithm training, and model construction. They ought to be subject to due process requirements, but under the regulatory scope of traditional procedural justice theory, they remain beyond regulation—even when their actions challenge judicial fairness.
Specifically, the construction of sentencing systems requires not only substantial financial resources but also the expertise of technical specialists. At present, such systems are generally built under the model of “judicial demand orientation + technical implementation.” In this process, legal knowledge and technical knowledge shape distinct logics of power. When technology developers employ technical means to interpret and satisfy judicial needs, discursive conflicts and value deviations may arise, leading to sentencing results that contravene judicial justice. For instance, when selecting training data, developers often refrain from making normative judgments about “right” or “wrong,” and may include wrongful cases or outdated precedents as samples, causing sentencing outcomes to deviate from legal standards. In short, as artificial intelligence intervenes in sentencing, technology developers become substantive participants influencing both the process and the results of litigation. Yet in practice, they remain outside the regulatory scope of traditional due process theory.
2.3 Failure of Procedural Transparency to Extend to Interpretability
Although procedural transparency is not usually treated as an independent principle of due process, this does not mean that due process does not require transparency. On the contrary, the very reason procedural transparency is not set forth as an independent principle is that it constitutes a foundational element of procedural fairness, permeating principles such as participation and neutrality—it is, in effect, the principle underlying other principles. Constrained by the times, procedural transparency under traditional procedural justice theory has taken “perceptibility of information” as its core. Yet in the context of AI-assisted sentencing, this perceptibility-centered transparency faces unprecedented challenges. First, judicial data training, algorithmic model construction, and automated decision-making processes all occur within digital space, beyond the direct perceptibility of physical space. Litigants must rely on technical means to pierce spatial barriers and obtain relevant information, thereby raising the standard for perceptibility. Second, sentencing algorithms are often imbued with value attributes, typically protected as intellectual property or trade secrets of developing enterprises, making them harder to disclose than ordinary information. Third, algorithms themselves are complex and highly technical; even if information regarding their principles, mechanisms, or source code were disclosed, neither the general public nor even judges could substantively understand them, let alone exercise participatory or dissenting rights. In other words, procedural transparency centered merely on perceptibility collapses in the face of artificial intelligence. The underlying reason is that “perceptibility of information” does not equate to “interpretability of information.” The former focuses only on visibility, whereas the latter emphasizes comprehensibility. For algorithms, interpretability is the key, while disclosure is merely a possible means toward that end—and not even a necessary one. Traditional procedural justice theory has not yet recognized the necessity of extending from perceptibility to interpretability. Confronted with the application of AI in criminal justice, the original theoretical framework has, to a large extent, become inadequate.
3. Feasible Pathways: The Attempted Introduction of Technological Due Process Theory
In the face of continuously developing criminal justice practices, traditional theories inevitably lag behind. However, this does not mean that we can abandon theory altogether and embrace “technological governance” uncritically. While traditional procedural justice theory does have its limitations, it has not lost its intrinsic value and can still serve as the “foundation” for new theories. Therefore, we should, based on national conditions and the specific contexts of criminal justice, actively promote the modernization of traditional procedural justice theory. Internationally, a theory known as technological due process, specifically designed for algorithmic automated decision-making, has been proposed. This theory offers a feasible pathway for the effective governance of AI-assisted sentencing.
3.1 The Legitimacy of “Supplementary” Introduction
Technological due process theory was first proposed by American scholar Danielle Citron. Its emergence is closely linked to the widespread use of automated decision-making in administrative fields since the 1950s. As automation technologies gradually penetrated areas such as eligibility screening, risk assessment, and government governance, concerns over “technocracy” and “algorithmic hegemony” spread. Citron pointed out that automation in administration dominated by algorithmic power posed challenges to elements of procedural justice such as participation and neutrality, making it necessary to reshape procedural mechanisms to empower administrative counterparts, stakeholders, and the public. In terms of content, technological due process theory requires automated systems to distinguish between “rules” and “standards,” ensure transparency and accountability, and generate and preserve audit trails. It further advocates granting individuals procedural rights such as the right to “receive effective notice” and “be heard.” Existing research has primarily refined and adjusted these elements, treating technological due process as a foundational theory for algorithmic governance and even digital governance. Within the field of criminal justice, some scholars have argued that traditional due process theory fails to explain judicial operations in the context of AI, and that it is therefore necessary to introduce technological due process theory to address procedural justice risks. Others contend that introducing technological due process theory as a supplement to traditional theory better explains and guides criminal justice in the digital age. In general, there is a growing consensus on the introduction of technological due process theory into criminal justice. However, the precise role the new theory should play and the justification for such an introduction remain underexplored.
In the author’s view, technological due process and traditional procedural justice share commonalities while also differing from each other. They are not mutually exclusive but complementary, with the former compensating for the latter’s temporal limitations. First, both belong to the broader category of procedural justice theory. Procedural justice has strong inclusiveness and flexibility, with its connotations evolving alongside societal changes. In short, procedural justice theory has both content inclusivity and application extensibility; as society transitions from the traditional to the digital era, it has correspondingly developed into “bloodline-related” traditional procedural justice and technological due process. Second, both traditional and technological procedural justice theories are rooted in the dignity theory and the theory of judicial credibility. They not only emphasize respect for human dignity as subjects but also seek to enhance trust and respect for judicial decisions by stakeholders and the public through due process. The principles stressed by traditional theory—such as procedural participation and neutrality—are ultimately aimed at respecting human dignity and enhancing the acceptability of judicial decisions. The same applies to technological due process: whether through demands for transparency and accountability in technology, or through the granting of procedural rights such as the right to effective notice and the right to be heard, all reflect an intrinsic value of “good,” namely, respect for human dignity.
At the same time, technological due process differs from traditional procedural justice in its fields of application, regulatory objects, and constituent elements, thereby providing a complementary role. In terms of application fields, traditional procedural justice theory is primarily oriented toward physical space and lacks mechanisms to address activities such as data collection and sentencing computation occurring in digital space. Technological due process, by contrast, is oriented toward digital space and focuses on the development and application of digital technologies such as AI. In terms of regulatory objects, traditional theory primarily addresses litigation parties, with rights, obligations, and liabilities built on this foundation. The advent of AI subjects human dignity to additional threats from data and algorithms, as well as challenges in accountability. Technological due process, however, incorporates data, algorithms, and the developers behind them into its regulatory scope, employing distributed moral responsibility to discipline human–machine interactions. It recognizes humans as responsibility-bearers while also analyzing the causal links between machines and legal consequences, thereby preventing responsibility from being shifted onto technology. In terms of constituent elements, traditional procedural justice overlooks the inevitable trend and challenges of digital technology’s intervention in justice. Technological due process, by contrast, encompasses both technical compliance empowerment and rights-based empowerment, thereby enabling the release of digital technology’s potential while ensuring effective governance. Of course, technological due process also has its own limitations and cannot entirely replace traditional procedural justice—for instance, it cannot be applied to litigation activities confined to physical space alone.
3.2 Interpretation of Elements under “Supplementary” Introduction
When a theory is applied to different contexts, its constituent elements inevitably require corresponding adjustments. The elements of technological due process theory discussed earlier were developed for automated administration, which differs fundamentally from judicial sentencing in terms of applicable objects and basic principles. Automated administration concerns administrative legal relations, while judicial sentencing concerns the imposition of criminal penalties. Thus, the “supplementary” introduction of technological due process must be tailored to the sentencing context through interpretive elaboration of its elements. At present, most research remains at the level of criminal justice broadly (without refinement to sentencing). For example, some studies argue that technological due process, as a standard for assessing the integration of digital technologies into criminal justice, includes five basic elements: the exclusion of bias, sufficient participation, procedural equality, procedural rationality, and effective accountability. Other scholars have focused specifically on sentencing but emphasized the value of “human-centeredness,” without elaborating on the elements of technological empowerment and rights-based empowerment. In the author’s view, providing targeted theoretical supply requires interpreting elements through the lens of a rule-of-law systems-engineering approach—analyzing both the internal relationships among elements and clarifying their substantive meanings.
In the sentencing context, technological due process can be structured at two levels. The first level is the value principle of “human-centeredness,” which serves as both the starting point and the ultimate goal of theoretical construction. The second level consists of the compliance obligations of sentencing systems and the procedural rights of stakeholders. The former aims to achieve normative empowerment by optimizing sentencing systems, while the latter seeks to ensure the subject status of stakeholders in human–machine interactions by enhancing their capacity for contestation. These two levels complement each other. Substantively, “human-centeredness” requires that AI always serve humanity, and that it be designed, developed, deployed, and used in a safe, trustworthy, and responsible manner. Within technological due process theory, “human-centeredness” governs both system compliance obligations and stakeholder procedural rights. Today, “human-centeredness” has already been recognized internationally as a governance principle, acquiring in some respects the effect of an international convention. Citron also identified “human-centeredness” as the orientation of technological due process, emphasizing that the theory is fundamentally aimed at safeguarding human dignity and preventing domination by machines.
First, compliance obligations of sentencing systems. To mitigate the risks to procedural justice caused by AI-assisted sentencing, sentencing systems should comply with the following requirements: ① Assistance, not domination. The internal logic of AI-empowered justice remains “human-led, machine-assisted.” Excessive reliance on AI risks judicial alienation; AI should serve as an assistant rather than a decision-maker. ② Transparency and interpretability. The black-box nature of sentencing systems creates difficulties for stakeholder participation and raises risks of algorithmic discrimination. Therefore, sentencing systems must meet the dual demands of transparency and interpretability—the former requiring disclosure of technical elements such as source code, input data, and output results, and the latter requiring reasonable explanation of these technical elements. ③ Neutrality and freedom from bias. Discriminatory factors embedded in training data or algorithmic models can affect final outputs. Thus, it is necessary to implement source control, standardize and audit data, and establish algorithmic impact assessments to ensure lifecycle oversight from development to application. ④ Accountability for harm. Legal liability serves as the gatekeeper of system compliance. Without liability, compliance obligations become meaningless. Sentencing systems must ensure accountability for harm, meaning that damage can be accurately traced, effectively controlled, and promptly remedied.
Second, procedural rights of stakeholders. The standardized operation of AI-assisted sentencing cannot rely solely on system compliance at the technical level; stakeholders must also be granted procedural rights to realize the “human-centeredness” principle. Specifically: ① Right to information. Because AI-assisted sentencing operates in digital space and often functions as a black box, stakeholders are unable to obtain timely information. They should therefore be granted the right to information, including whether and how the sentencing system is used, and how its results are handled. ② Right to full participation. Stakeholders currently lack effective channels for participation under AI-assisted sentencing. They should be granted the right to full participation, enabling substantive involvement and leadership in human–machine interaction through mechanisms such as algorithmic disclosure and algorithmic hearings, thereby preventing technological alienation. ③ Right to algorithmic explanation. Given the complexity and technicality of sentencing algorithms, stakeholders face a knowledge gap that prevents understanding and effective questioning. Although transparency measures may enhance access to information, they are insufficient. Stakeholders should be granted the right to algorithmic explanation, allowing them to challenge sentencing algorithms and require users to provide explanations. ④ Right to algorithmic remedy. Stakeholders should enjoy the right to algorithmic remedy, enabling them to seek redress from relevant authorities or request human substitution when AI-assisted sentencing causes or may cause rights-based harm.
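Item ② of the compliance obligations above distinguishes disclosing technical elements from explaining them. One way to make a recommendation explainable is to use an additive model whose output decomposes into per-circumstance contributions, so that an explanation can accompany each individual case. The baseline and weights below are invented for illustration and are not the parameters of any real sentencing system.

```python
# Minimal sketch of a per-case explanation for an additive sentencing model.
# The baseline and weights are invented for illustration; a real system's parameters
# and their legal justification would themselves require disclosure and review.
BASELINE_MONTHS = 12.0
WEIGHTS = {               # contribution of each circumstance, in months
    "confession": -2.0,
    "restitution": -3.0,
    "recidivist": +6.0,
    "weapon_used": +4.0,
}

def recommend(circumstances: set) -> tuple:
    """Return a recommendation and the per-circumstance contributions behind it."""
    contributions = {c: WEIGHTS[c] for c in circumstances if c in WEIGHTS}
    return BASELINE_MONTHS + sum(contributions.values()), contributions

months, explanation = recommend({"confession", "recidivist"})
print(f"recommended: {months:.0f} months")
for circumstance, delta in explanation.items():
    print(f"  {circumstance}: {delta:+.1f} months")
```

Whether such an additive form is expressive enough for sentencing is itself a design and legal question; the point here is only that “interpretability” demands an explanation tied to the individual decision, not merely access to source code.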
4. Institutional Reshaping and Mechanism Expansion of AI-Assisted Sentencing
A review of the existing regulations reveals that the Opinions on Regulating Certain Issues of Sentencing Procedures, jointly issued by the Supreme People’s Court, the Supreme People’s Procuratorate, and three ministries, have not yet responded to the issue of due process in the context of judicial sentencing assisted by artificial intelligence. Similarly, the Opinions on Regulating and Strengthening the Application of Artificial Intelligence in the Judiciary (hereinafter referred to as the Opinions on Intelligent Justice) issued by the Supreme People’s Court only stipulate basic principles such as safety, legality, fairness, and impartiality, without specifying the rights and obligations of the parties to litigation, the procedural steps, and other details. In this sense, the theory of technical procedural justice should serve as guidance to promptly improve relevant legislation and apply it to risk governance, striving to achieve a cycle of “theory guiding legislation—legislation regulating practice—practice feeding back into theory.” Since legislative improvement cannot be completed in a short period of time, priority should be given to institutional reshaping and mechanism expansion. The former, serving as a bridge between legal principles and legal rules, embodies the value orientation of legal principles while guiding the concrete content of legal rules. The latter, as a spillover design around legal institutions, can compensate at the micro level for the gaps that institutions, being relatively macro in character, inevitably leave, and can expand avenues for problem-solving to support integrated governance.
4.1 Institutional Reshaping
As mentioned earlier, the theory of technical procedural justice in the context of judicial sentencing follows the principle of “human-centeredness,” covering both the compliance obligations of sentencing systems and the procedural rights of stakeholders. These two aspects should be integrated during institutional reshaping to jointly promote the reform of sentencing standardization and the realization of digital justice.
First, establish a system of procedural choice. While AI-assisted sentencing improves efficiency and accelerates consistency in similar cases, it also brings challenges to due process and threatens the status of the parties, whereas the foundation of litigation lies in judicial fairness. Therefore, the use of sentencing systems should be optional. To begin with, since the defendant is the bearer of sentencing judgments, the right of procedural choice should belong to them, i.e., the decision on whether the court uses the sentencing system. As for whether the procuratorate’s use of AI requires the defendant’s consent, this should be analyzed on a case-by-case basis: in negotiated justice, sentencing recommendations as part of negotiation should take into account the defendant’s opinion, while in adversarial justice, sentencing recommendations do not require the defendant’s consent. Second, given that China’s sentencing systems are not yet fully mature and cannot cope with cases involving complex facts and numerous circumstances, consideration should be given to limiting their use to minor offenses. The evidence and facts in such cases are relatively clear and would not place excessive demands on the sentencing system. Third, to ensure that the defendant truly exercises their right of procedural choice, the adjudicator should inform them of the use of the sentencing system and its consequences, and confirm whether the defendant sincerely and voluntarily agrees to apply AI-assisted sentencing. Finally, procedural choice does not merely involve the “entry” into the sentencing system but also its “exit.” Freedom includes both positive and negative aspects, and procedural choice is essentially a kind of procedural freedom; therefore, the right of procedural choice should cover both “entry” and “exit.” Some scholars argue that while defendants enjoy the voluntary right to choose the sentencing algorithm decision procedure, such choice is irreversible. The author disagrees. Article 5 of the Opinions on Intelligent Justice stipulates: “All types of users have the right to choose whether to use the assistance provided by judicial AI, and have the right to withdraw at any time from interactions with AI products and services.” To prevent arbitrary withdrawal from causing inefficiency and resource waste, appropriate restrictions may be imposed on exiting procedures, such as requiring the defendant to provide certain evidence or reasons.
Second, clarify the system of algorithm disclosure. Solving the problems of algorithmic black boxes and algorithmic discrimination cannot be achieved without algorithm disclosure; only by opening the black box and explaining its principles can discriminatory sentencing be effectively curbed. Several points should be noted in relation to this system. First, the connotation of algorithm disclosure should be accurately understood. Some views equate algorithm disclosure with algorithm openness, considering it essentially a process of information disclosure and circulation aimed at bridging the “information gap” between the public and technical developers—that is, “perceivability.” In fact, algorithm disclosure also involves explanations of algorithmic principles, logic, and even source code—that is, “interpretability.” Second, algorithm disclosure is directly related to whether and to what extent stakeholders can participate in judicial sentencing, and stakeholders should have the right to apply for its initiation. As a judicial authority, the court has the obligation to ensure the correctness of sentencing decisions and may also initiate disclosure ex officio. Third, algorithm disclosure should follow the principle of necessity. Some argue that sentencing algorithms have a public character and should therefore be disclosed as a general principle in a comprehensive manner. The author believes that sentencing algorithms are not entirely devoid of private attributes (e.g., those co-developed by judicial authorities and enterprises). Moreover, in many cases, the parties have no objection to the sentencing algorithm or only partial objections; universal disclosure is unnecessary and impractical. To balance algorithm openness with the protection of trade secrets, algorithm disclosure should in principle target the disputed parts. Pasquale’s notion of “qualified transparency” also advocates for selective restrictions on the scope of disclosure according to different contexts. Fourth, sentencing algorithms are complex and technical, requiring responsible parties to provide explanations to ensure the effectiveness of disclosure. Technical developers, being responsible for training sentencing algorithms, should bear the obligation to explain their “products” by appearing in court to clarify the training process and operational mechanisms of the algorithm. The procuratorate and the court, as service users, should also explain the steps and methods of their use of sentencing systems and the handling of sentencing outcomes.
Third, improve the systems of expert testimony and technical assistants. The parties, especially the defendants, usually lack professional knowledge and may, due to “psychological coercion,” treat the output of the sentencing system as the standard answer. The monopoly of public authority further worsens the disadvantaged position of the defendant. To achieve “equality of arms,” reliance on expert testimony and technical assistants is necessary. Articles 146 and 197 of the Criminal Procedure Law provide for the systems of expert testimony and technical assistants to strengthen the position of the parties, and these can also be applied in AI-assisted sentencing. Specifically, the selection of training data, the construction of algorithmic models, and the principles of intelligent sentencing all involve specialized issues that cannot be resolved without expertise; expert witnesses and technical assistants serve precisely to bridge the cognitive gap of the parties, and their roles are complementary. In addition, expert witnesses and technical assistants can help stakeholders understand disclosed algorithmic principles, logic, and other technical elements, thereby realizing algorithmic transparency and interpretability. Due to factors such as the conditions for court appearance and litigation costs, these systems still face limitations in judicial practice. Therefore, it is necessary to restrict judicial discretion by specifying certain statutory circumstances for mandatory court appearance and cross-examination, such as when the prosecution and defense have significant disputes over the sentencing algorithm, or when technical developers cannot clearly explain the algorithm.
Fourth, establish a system of intelligent accountability. Human–machine interaction complicates the issue of responsibility. If the specific responsible party cannot be clearly identified, technical developers will lose the incentive to pursue compliance in sentencing systems, and stakeholders’ right to algorithmic remedies will also lack avenues for recourse. The system of intelligent accountability includes the following points. First, AI itself is not a responsible subject; accountability should be sought among legal subjects such as litigants, technical developers, and service users. China’s Ethical Norms for the New Generation of Artificial Intelligence explicitly requires adherence to the principle that humans remain the ultimate responsible subjects. Second, to ensure effective accountability, the developers, users, and usage details of sentencing systems should all be recorded, and judicial documents should include strengthened explanations of intelligent sentencing to enable tracing back to points of accountability for precise responsibility. Third, given the high threshold for determining fault in sentencing systems and the limited evidentiary capacity of litigants, the principle of presumed fault should generally apply, requiring technical developers and service users to prove the absence of obvious fault. Finally, remedies include both procedural and substantive forms. Procedural remedies primarily concern the withdrawal from the use of sentencing systems, initiation of appeal procedures, and retrial upon remand in second-instance proceedings, while substantive remedies include civil compensation and state compensation.
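The recording and tracing requirement in the second point above can be given concrete form as an append-only audit record created for every invocation of a sentencing system, so that a disputed recommendation can later be traced to a system version, its developer, the user, and the inputs actually used. The field names below are assumptions about what such a record might contain, not a prescribed standard.

```python
# Illustrative append-only audit record for each use of a sentencing system.
# Field names and values are hypothetical; the digest supports tamper-evident logging.
import hashlib, json, datetime

def audit_record(case_id, system_name, system_version, developer, user, inputs, output):
    record = {
        "case_id": case_id,
        "system": system_name,
        "version": system_version,
        "developer": developer,          # responsible development entity
        "user": user,                    # organ and officer that invoked the system
        "inputs": inputs,                # data actually fed to the model
        "output": output,                # recommendation actually produced
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    # Hash of the serialized record allows later verification that it was not altered.
    record["digest"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

log = []
log.append(audit_record(
    case_id="demo-2025-001", system_name="demo-sentencing-assistant", system_version="0.3.1",
    developer="Vendor X", user="Court Y / Judge Z",
    inputs={"charge": "theft", "circumstances": ["confession"]},
    output={"recommended_months": 9},
))
print(json.dumps(log[-1], indent=2, ensure_ascii=False))
```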
4.2 Mechanism Expansion
Guided by the theory of technical procedural justice, reshaping the institutions of AI-assisted sentencing can accelerate the reform of sentencing standardization. However, the micro-level deficiencies of the legal system and the limitations of its application scenarios mean that the realization of due process still has some distance to go. Mechanism expansion is therefore necessary to coordinate with institutional reshaping and achieve synergistic interaction.
First, the mechanism of collaborative development. At present, sentencing systems adopt a single development model dominated by technical developers: on the one hand, development is mainly carried out through independent research or technology procurement by judicial organs such as procuratorates and courts, with insufficient participation by social forces such as lawyer associations, law schools, and the general public; on the other hand, judicial personnel are only responsible for raising demands, while technical developers fulfill these demands through data training and algorithm modeling. This single development model has created or exacerbated the challenges that AI-assisted sentencing poses to due process, such as the absence of lawyer associations and stakeholders, which leads to the failure of effective participation. Experience in social governance shows that collaborative governance by multiple subjects not only enhances the democratic legitimacy of decision-making and thereby reduces discrimination but also integrates resource advantages and improves governance efficiency. In this regard, some may question whether, if the underlying logic of the procuratorate’s and the court’s sentencing systems is identical, the sentencing results of both will converge, rendering substantive trial hearings meaningless. In fact, only with the participation of multiple actors in the development of sentencing systems can the systems approximate value neutrality and objective fairness through the contestation of discursive power and the balancing of interests. Specifically, participation of multiple actors—especially defendants and defense lawyers—in the development of sentencing systems can introduce the defense perspective into data training and model construction, correcting the prosecutorial tendencies of the single development model. Moreover, through communicative interaction during collaborative development, multiple actors can reach consensus, ensuring that AI continues to develop in a benevolent direction and adheres to the principle of “human-centeredness.”
Second, the mechanism of algorithm filing. Algorithm disclosure and intelligent accountability mainly target sentencing algorithms in individual cases. Thus, institutional reshaping can only achieve procedural justice in a case-specific and ex post manner, which is insufficient to resolve the challenges of AI-assisted sentencing at their root; algorithm filing mechanisms are needed to provide support. China’s three major regulations on AI governance—the Provisions on the Administration of Algorithmic Recommendations for Internet Information Services, the Provisions on the Administration of Deep Synthesis in Internet Information Services, and the Interim Measures for the Administration of Generative Artificial Intelligence Services—all establish algorithm filing mechanisms. Some may argue that the current filing requirements are limited to algorithms “with public opinion attributes or social mobilization capabilities” and that sentencing algorithms are not included. The author believes, however, that in the iterative development of digital technologies, it is impossible to prescribe all algorithms in a single step—this is evidenced by the continuous expansion of filing scope with the promulgation of new regulations. At present, algorithm filing primarily serves a notification function, allowing only for ex post tracing and accountability when risks are highly apparent or harm has already occurred, with limited punitive force. To implement traceable governance, it is necessary, on the one hand, to introduce license-based filing, requiring substantive review of information such as operating mechanisms and assessment reports, allowing only those algorithms that pass review to be used in judicial sentencing, with regular inspections required. On the other hand, a dynamically adjustable “negative list” should be established, under which sentencing algorithms with risks of discrimination and developers of noncompliant systems are blacklisted, prohibiting the use of such algorithms or barring procurement of their services for a certain period (or even permanently).
Third, the mechanism of algorithmic hearings. The use of sentencing systems in fact runs throughout the entire litigation process, including pre-trial intelligent sentencing recommendations, in-trial intelligent sentencing judgments, and post-trial intelligent sentencing remedies. However, the system of algorithm disclosure primarily applies to the courtroom stage, and thus a dedicated algorithmic hearing mechanism is required. Hearing procedures align deeply with the values of due process such as “the appearance of fairness,” “predictability, transparency, and rationality,” and “participation.” Applying them to the handling of objections to sentencing systems can enhance the acceptability of judicial decisions. Several points merit attention regarding this mechanism. First, starting from stakeholders’ right to information, the organs presiding over hearings at each stage should provide notice, enabling stakeholders to understand the rules, requirements, and consequences of algorithmic hearings and thereby apply in a timely manner. Second, all parties should have the right to apply for algorithmic hearings, express their opinions and claims, and safeguard their legitimate rights and interests. Decision-makers may also initiate algorithmic hearings ex officio. Third, given the professional and complex nature of sentencing algorithms, hearings may introduce both professional groups and social groups: the professional group, composed of legal and technical experts, would review the sentencing system from legal and technical perspectives, while the social group, composed of the general public and public-interest organizations, would assess the social impact of intelligent sentencing outcomes. Finally, algorithmic hearing decisions should be endowed with corresponding legal effect, to prevent the mechanism from becoming hollow due to a lack of enforceable consequences. If, after a hearing, the sentencing system is found to be discriminatory or erroneous—that is, if the objection is upheld—then the intelligent sentencing recommendation may not serve as the basis for judicial decision-making. Conversely, if the objection is dismissed, it may directly serve as the basis for judicial decision-making.
Fourth, the mechanism of judicial training. It is precisely because judicial personnel do not understand the principles of sentencing algorithms that, under the influence of “psychological coercion,” they may directly accept intelligent sentencing outcomes, leading to a distortion whereby technology dominates justice. Under the theory of technical procedural justice, reshaping the systems of algorithm disclosure, expert testimony, and technical assistants can partially mitigate these challenges but do not constitute a fundamental solution. An increasing number of scholars advocate for training or recruiting compound judicial personnel to cope with judicial activities in the digital age. In the author’s view, establishing a judicial training mechanism is not only feasible but necessary. Judicial personnel need not attain or surpass the level of technical experts; it suffices for them to master the basic concepts and characteristics of relevant terminology, understand the principles and mechanisms of disclosed algorithms, and be capable of rejecting illogical opinions. Specifically, AI experts may be invited to develop courses tailored for judicial personnel, with training outcomes tested through large-scale drills and professional competitions, thereby cultivating a group of experienced compound talents. Then, by summarizing valuable experience from judicial practice, a nationwide case database of intelligent sentencing decisions may be established. To prevent the judicial training mechanism from becoming an additional burden, it should be strictly prohibited to mandate that a single judicial officer master multiple algorithms, or to treat training experience as an indicator of whether annual performance assessments are satisfactory.
Originally published in the Journal of Northeastern University (Social Science Edition), No. 2, 2025. Reprinted with permission from the WeChat public account “Journal of Northeastern University (Social Science Edition).”