Interdisciplinary Construction of AI Ethics Institutions: The Perspective of Complex Adaptive Systems
LI Xueyao
Professor, Kaiyuan Law School of Shanghai Jiao Tong University
Abstract: For national AI regulatory bodies, transforming AI ethics from a framework of moral principles into actionable, predictable, and technologically implementable ethical norms is an urgent practical issue. As the dominant theoretical and policy-making guideline in scientific and technological ethics, Principlism is increasingly unable to keep pace with rapid technological development and requires adaptation in light of the technical characteristics of artificial intelligence and practices such as value alignment and ethics auditing. The article “The Application, Challenges, and Beyond of Principlism in AI Ethics” elaborates on the current state and dilemmas of applying Principlism in AI ethics and proposes theoretical modifications drawing on evolutionary psychology and sociotechnical systems theory. Expanding on these approaches, the subsequent discussion further integrates behavioral law and economics and Complex Adaptive Systems (CAS) theory to construct a preliminary framework of “Complex Adaptive Systems Ethics,” aiming to reconcile practical operability with theoretical consistency in the construction of AI ethics institutions.
Keywords: Principlism; Complex Adaptive Systems; Evolutionary Psychology; Behavioral Law and Economics; Sociotechnical Systems Theory
1. Reconsidering the Framework of Principlism Theory
Principlism, introduced by Tom L. Beauchamp and James F. Childress in the late 1970s, serves as the theoretical foundation and methodological framework for bioethics as a distinct discipline. Over time, it has found extensive application in both academic research and practical fields, such as artificial intelligence (AI) ethics and biomedical ethics. Despite the diverse formats in which AI ethical principles are presented globally, their methodological structures remain consistent with the characteristics of principlism. In essence, recent theoretical and institutional advancements in AI ethics have predominantly occurred within the framework of principlism.
Principlism adopts a pluralistic, non-foundationalist framework where no single philosophical foundation is posited for individual principles. Moreover, it refrains from asserting that these principles hold absolute authority in all contexts. This flexibility allows the principles to be weighed and adjusted to suit specific circumstances without necessitating a universal philosophical justification for each.
Ethical principles in AI frameworks often mirror this characteristic. For instance, principles such as improving human well-being and enhancing quality of life draw from utilitarianism; interpretability, privacy protection, and accountability are rooted in deontology; and trustworthiness and responsibility stem from virtue ethics. Norms like anti-bias, fairness, transparency, and inclusiveness are commonly shared across various ethical frameworks.
Although principlism is characterized by its non-foundationalist stance, it invokes public morality theory to underscore the universality of its principles. In the context of AI ethics, many principles and standards are designed with universality in mind. Examples include the Ethics Guidelines for Trustworthy AI by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) and ethical frameworks proposed by UNESCO and other global organizations. These initiatives claim that principles such as privacy protection, fairness, and transparency are universally applicable to AI development and deployment across cultures and borders, reflecting the ethos of public morality theory.
A further characteristic is reflective equilibrium without hierarchical sequencing. Principlism avoids any fixed hierarchical ordering among its principles, relying instead on a reflective equilibrium mechanism to balance competing ethical considerations in specific scenarios. This mechanism does not rigidly adhere to external rules or theories but achieves moral balance through iterative interaction between case-specific contexts and moral principles.
This approach differs significantly from mainstream legal theories such as analytical jurisprudence, which emphasize logical coherence to ensure systemic certainty. While legal principles may carry greater authority than specific rules, they lack the paired structure of factual conditions and normative consequences that rules possess. In legal application, principles are therefore constrained by requirements of universality, proportionality, and systematicity.
Principlism prioritizes a problem-oriented approach, focusing on the practical application of principles in complex and dynamic environments. Its goal is to provide actionable guidance for resolving ethical dilemmas rather than engaging in abstract theoretical debates. As noted, “principlism is not designed for academic philosophers or ethicists concerned with theoretical self-consistency.”
AI ethical principles are typically developed to address concrete issues and technical challenges. For example, privacy and data protection principles respond to concerns raised by the advancement of big data and machine learning, while calls for fairness, transparency, and accountability aim to mitigate bias and discrimination in AI decision-making. This practical orientation aligns with principlism’s commitment to problem-solving.
Principlism remains a robust and adaptable framework for addressing ethical issues across diverse domains, including the rapidly evolving field of AI. Its pluralistic, universalist, reflective, and problem-oriented characteristics ensure its relevance in crafting ethical solutions to contemporary challenges.
2. The Revisionist Approach to Evolutionary Psychology
Evolutionary psychology, by exploring the evolutionary mechanisms underlying human moral behavior, can provide support for principlism, helping to address key issues related to cultural diversity, universality, and moral intuitions. This enhances its effectiveness in tackling real-world moral challenges.
First, the evolutionary foundations of moral principles help explain their origins. The core principles of autonomy, non-maleficence, beneficence, and justice have long been at the center of ethical debates, particularly regarding their applicability across cultures and the reasons for their universality. Evolutionary psychology, by exploring the evolutionary basis of moral behavior, offers an explanation for the universality of these principles throughout human history, thereby providing a theoretical backdrop for principlism. For instance, the principle of autonomy can be understood as stemming from the evolutionary process, in which an individual's ability to make independent decisions and control their own destiny increased their survival prospects, creating an evolutionary advantage. Similarly, the principles of non-maleficence and beneficence can be linked to the necessity of group cooperation. Maintaining cooperative relationships is vital for the survival of social groups, making the reduction of harm and mutual aid fundamental to the evolution of stable societal structures. The principle of justice can be explained through reciprocal behavior. Research in evolutionary psychology further demonstrates that fair behavior promotes social cooperation, reduces group conflict, and ultimately enhances the overall viability of the group. Jonathan Haidt's Moral Foundations Theory is particularly useful in explaining why there is a need, and why it is possible, to construct a framework of public morality that integrates deontology, consequentialism, and virtue ethics.
Second, addressing cultural diversity through the lens of moral diversity grounded in evolutionary principles. A significant challenge for principlism lies in addressing moral diversity across different cultural contexts. Evolutionary psychology offers a valuable framework for understanding and responding to this diversity by explaining its evolutionary foundations: humans have adapted to varied ecological and social environments throughout their evolutionary history, and diverse cultures emerged as a result. Differences among cultures can thus be interpreted as adaptive strategies tailored to specific environmental conditions. For instance, certain cultures prioritize collective interests, possibly because historical circumstances required stronger collective coordination for survival; in other cultures, by contrast, adaptation took the form of an emphasis on individual autonomy.
Third, strengthening the reflective equilibrium mechanism through an understanding of moral intuitions. The reflective equilibrium mechanism relies on individuals resolving moral conflicts in specific contexts through reflection and rational deliberation. Moral intuitions and emotions play a crucial role in moral judgment, a central focus of evolutionary psychology. Evolutionary psychology posits that moral intuitions, the concern of intuitionist ethics, arise from rapid-reaction mechanisms developed over the course of human evolution. These mechanisms enable humans to make swift decisions in complex environments. On this account, moral intuitions, such as an instinctive aversion to unfair behavior, are emotional responses shaped by evolutionary adaptation, helping humans quickly identify actions that threaten group cooperation or stability. This understanding of moral intuitions can enhance the explanatory power of principlism's reflective equilibrium mechanism by acknowledging the role of moral emotions in everyday moral judgments. It encourages a broader consideration of emotions and intuitions in moral decision-making, rather than reliance on purely rational reflection alone.
Fourth, enhancing universality: explaining why certain ethical principles are shared across cultures. Principlism emphasizes the universality of its moral principles, but this claim is often challenged in the context of multiculturalism. Evolutionary psychology suggests that some fundamental moral behaviors, such as cooperation, anti-deception, trust, and the punishment of injustice, have been preserved through natural selection during human evolution. These behaviors enable human groups to maintain stability and cooperation, which explains their cross-cultural presence. Evolutionary psychology thus helps clarify why certain ethical principles can be applied across cultures, strengthening principlism's claim to universality and making it more persuasive in global and cross-cultural ethical discussions.
Fifth, explaining moral transformation and adaptation: dynamically adjusting moral principles. Human society is constantly evolving, and ethical norms change in response to shifts in society, technology, and the environment. Evolutionary psychology can provide an explanatory framework for moral transformation, aiding principlism in maintaining flexibility and adaptability in the face of rapidly changing societies. For instance, emerging technologies like artificial intelligence and genome editing introduce new ethical challenges. Evolutionary psychology can help principlism understand how morality adapts to these changes. Similarly, technological advancements alter modes of cooperation and patterns of moral interaction among individuals, potentially requiring reinterpretation and adaptation of traditional moral principles. By leveraging the dynamic understanding of societal changes offered by evolutionary psychology, principlism can develop more flexible moral frameworks to adapt to ever-changing environments and technological challenges.
3. The Revisionist Approach to Socio-Technical Systems Theory
Sociotechnical Systems Theory emphasizes the interaction and coordination between technological systems and social systems. With the development of artificial intelligence (AI) and machine learning, machines are no longer merely tools; their behaviors and decision-making capabilities increasingly resemble those of "autonomous" systems. This necessitates a re-evaluation of their roles within sociotechnical systems through ethical theories, including principlism. This section explores the revision of principlism through Sociotechnical Systems Theory, framed primarily in terms of Machine Behavior research.
First, redefining the principle of autonomy. The principle of autonomy focuses on safeguarding individuals’ rights to self-determination. However, with the widespread application of AI systems, machine decision-making has begun to influence human decision-making autonomy. Within sociotechnical systems, machines are not merely tasked with information processing and data analysis but are also capable of making complex decisions. Under these circumstances, “autonomy” is no longer exclusive to humans but extends to scenarios of human-machine collaboration. Through Sociotechnical Systems Theory, the principle of autonomy can be redefined, expanding its scope from human autonomy to collaborative autonomy between humans and machines. Machine Behavior can provide insights into the roles and limitations of machines in decision-making processes, demanding that AI systems support decision-making without compromising human autonomy. For instance, autonomous driving systems rely on their sensors and algorithms for real-time decision-making, but in emergencies, these systems should be designed to allow human passengers to make the final decision. This collaborative autonomy between humans and machines should become part of the redefined principle of autonomy within principlism.
Second, enhancing transparency and explainability. The ethical principles of transparency and explainability are critically important in artificial intelligence systems. Machine Behavior research can help us understand the decision-making processes of machines and develop systems that are more transparent and explainable, enabling users to see clearly why a machine made a particular decision. This aligns with the principles of explainability and transparency in principlism, allowing these principles to adapt to new technological environments. For example, in medical diagnostics, the diagnostic recommendations provided by an AI model must be transparent and explainable so that physicians can understand the basis of the model's conclusions and make final decisions based on the specific circumstances of the patient.
Third, reshaping the principles of fairness and bias suppression. The principles of fairness and bias suppression are critically important in artificial intelligence systems, particularly in data-driven algorithms. Machine Behavior can help analyze machine behavior patterns in complex environments and identify systemic biases that algorithms might generate. By combining Sociotechnical Systems Theory with Machine Behavior, principlism can delve deeper into analyzing and addressing bias within AI systems. Through the analysis of machine behavior, system designers can identify and eliminate biases during the development phase, helping to ensure system fairness. For instance, AI systems used for recruitment may make decisions influenced by gender or racial biases present in historical data. By leveraging insights from Machine Behavior, designers can detect these biases and implement measures to eliminate them, ensuring fairness in AI decision-making processes.
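To make this kind of bias detection concrete, the following is a minimal, hypothetical Python sketch: it computes the gap in selection rates between demographic groups in a recruitment model's decisions, a simple "demographic parity" check. The candidate data, group labels, and tolerance are invented for illustration; real ethics audits combine multiple fairness metrics with statistical testing.

```python
# Hypothetical demographic parity check on a recruitment model's decisions.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the spread of positive-decision rates across groups, plus the rates."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision  # decision is 1 (hire) or 0 (reject)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Invented model outputs for six candidates from two demographic groups.
decisions = [1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]

gap, rates = demographic_parity_gap(decisions, groups)
print(f"selection rates: {rates}, parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative tolerance, not a legal or regulatory standard
    print("disparity exceeds tolerance; review training data and features")
```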
Fourth, emphasizing the Fairness Doctrine and its synergy with sociotechnical systems. Sociotechnical Systems Theory underscores the coordinated development of technological and social systems, and the Fairness Doctrine requires reinterpretation within this context of synergy. Machine Behavior can provide insights into fairness issues arising from interactions between machines and humans within social systems. The Fairness Doctrine should be expanded to cover complex sociotechnical systems, ensuring that machines collaborating with humans do not produce unjust outcomes. Machine Behavior research can assist in designing AI systems that address the fairness needs of different interest groups when tackling social issues. For example, in AI systems for urban planning and resource allocation, it is crucial to ensure that algorithms account for the diverse needs of various social groups, avoiding social inequities stemming from technological factors. This concept of fairness no longer applies solely to individual humans but extends to collaborative interactions between humans and machine systems.
Fifth, the principle of safety and accountability: responsibility allocation in machine behavior. In sociotechnical systems, machines are assuming increasingly significant responsibilities, introducing new challenges regarding safety and accountability. Machine Behavior can help identify and predict potential risks in machine behavior and develop corresponding mechanisms for responsibility allocation. The principles of safety and accountability should be adjusted through Sociotechnical Systems Theory to ensure clear responsibility distribution within complex technological systems. Particularly when AI system decisions may lead to adverse outcomes, Machine Behavior can aid in designing responsibility allocation mechanisms to clarify accountability in cases of accidents or erroneous decisions. For example, in traffic accidents involving autonomous vehicles, it should be clearly determined whether the responsibility lies with the manufacturer, the software designer, or the vehicle itself. Machine Behavior can assist in predicting potential system errors during the design phase and guide the development of appropriate mechanisms for responsibility allocation.
Sixth, enhancing participation to achieve social collaboration in Human-Computer Interaction (HCI). The principle of participation requires that stakeholders have the opportunity to be involved in the decision-making process, particularly in systems involving ethical decision-making. In the design and deployment of artificial intelligence systems, Machine Behavior can help us understand how to better facilitate user interaction with the system and encourage participation in system development and feedback. Users should not merely be passive recipients of AI systems but should actively contribute to their design and operation through feedback mechanisms. For example, the design of intelligent healthcare platforms can encourage patients to participate in data feedback and decision-making processes, enabling the system to provide more personalized services tailored to their needs.
4. The Revisionist Approach to Behavioral Law and Economics
Behavioral Law and Economics is an interdisciplinary field that integrates behavioral economics and legal analysis. It focuses on cognitive biases and irrational behaviors in human decision-making and examines how these behaviors influence the development of legal systems and policies. This discipline can provide new perspectives for constructing ethical frameworks for artificial intelligence, making them more aligned with actual human behavior and societal needs.
First, improving the practicality of the Reflective Equilibrium mechanism by addressing cognitive biases. The traditional Reflective Equilibrium mechanism in principlism often assumes an idealized framework, failing to account for the complexities of human behavior. The four ethical principles in biomedical and AI ethics—respect for autonomy, beneficence, non-maleficence, and justice—are rooted in normative ethics and legal theory, built on the premise of fully rational decision-making. However, Behavioral Law and Economics research reveals that human decisions are frequently shaped by cognitive biases and social contexts. In this context, Behavioral Law and Economics can offer valuable insights into bounded rationality and behavioral biases, enriching the application of AI ethics. For instance, AI system designers may underestimate or overlook users’ cognitive biases, such as overconfidence, inattentiveness, or information overload. These biases can affect the fairness or autonomy of AI-driven decision-making processes. Incorporating insights from Behavioral Law and Economics allows the Reflective Equilibrium mechanism to become more adaptive, dynamically adjusting to users’ behavioral patterns to ensure that ethical principles remain robust and unaffected by biases in real-world scenarios. Furthermore, the “nudge” theory from Behavioral Law and Economics can enhance decision-making frameworks in AI systems. AI systems can be designed to subtly guide users toward decisions that are beneficial to themselves and society while preserving their freedom of choice.
Second, enriching the empirical foundation of public morality theory and enhancing its explanatory power and practicality. Behavioral Law and Economics highlights the real-world challenges and obstacles humans face in adhering to rules. Recognizing these difficulties, ethical framework designers can adjust how principles are expressed and implemented, making them easier to understand and apply. For instance, when emphasizing principles of fairness and non-discrimination, providing specific operational guidelines and case studies can help AI developers and users better practice these principles. Research in Behavioral Law and Economics can shed light on the public’s genuine attitudes and reactions to AI ethics issues. This understanding aids in crafting ethical frameworks that reflect social consensus, thereby increasing their societal acceptance and legitimacy.
Third, responding more effectively to the need for institutional change driven by technological iteration, thereby facilitating the creation of more effective laws and policies. The integration of Behavioral Law and Economics with public morality theory enhances the operability of AI ethics frameworks by addressing key challenges. AI systems are often embedded in complex social and organizational environments where responsibility is dispersed among developers, corporations, users, and regulators. Behavioral Law and Economics can help public morality theory tackle the issue of "diffusion of responsibility." While all parties bear some level of responsibility, the complexities of individual biases and collective behavior often lead to a lack of accountability. Insights from Behavioral Law and Economics can help create clearer frameworks for assigning responsibility, ensuring each actor fulfills their ethical and legal obligations. It also aids in designing AI systems that align with public morality. By understanding the moral preferences and behavioral patterns of different social groups, AI systems can be tailored to meet public moral expectations. For instance, in the financial sector, AI systems can reflect public morality by studying consumer behavior, such as risk attitudes and cognitive biases, to ensure they do not harm vulnerable groups or exacerbate social inequalities. Furthermore, Behavioral Law and Economics contributes to understanding risk perception and addressing fairness in the distribution of risks and benefits. Public morality theory in AI ethics emphasizes balancing these aspects. Behavioral insights can clarify how different groups perceive risks and benefits, helping to prevent unfair outcomes. For example, autonomous driving technologies prompt debates on public safety and individual responsibility. Behavioral Law and Economics can guide the development of policies that align AI applications with public moral demands while safeguarding individual interests.
5. The Remaining Issues in Evolutionary Psychology, Sociotechnical Systems Theory, and Behavioral Law and Economics
The modification of principlism through evolutionary psychology, sociotechnical systems theory, and behavioral law and economics can address many of the issues traditional principlism faces when confronted with cultural diversity, technological change, and the complexity of real-world decision-making. However, even with the introduction of these interdisciplinary perspectives, some issues remain unresolved.
First, ethical conflicts in the dynamic and unpredictable changes of complex systems. Modern sociotechnical systems are becoming increasingly complex, characterized by uncertainty and systemic risks, especially in fields such as artificial intelligence and financial technology. Small changes in these systems can lead to significant impacts; the related issues are often multilayered, multifactorial, and subject to dynamic changes. As AI systems are rapidly applied in sectors like healthcare, finance, and transportation, new ethical conflicts continually emerge. The fixed framework of principlism struggles to fully adapt to these complex and dynamic changes. While machine behavior theory can help address specific issues, it primarily focuses on localized problems and lacks a systemic perspective to handle cross-system and cross-domain ethical issues and risks. Evolutionary psychology can explain the evolutionary foundations of some core ethical principles, but it cannot effectively address emerging ethical problems in the context of rapid technological change. Similarly, the bounded rationality in behavioral law and economics fails to account for the high levels of uncertainty and nonlinearity faced in moral decision-making.
Second, the issue of the priority of principles under non-foundationalism remains unresolved. While the reflective equilibrium mechanism allows conflicts between principles to be reconciled in specific contexts, in practice the prioritization of principles can be difficult to define clearly, leading to subjectivity and inconsistency in moral decision-making. For example, where autonomy conflicts with the public interest, how to protect individual privacy while maximizing social welfare is a complex question. This delicate balancing may produce subjective decisions owing to the ambiguity of the rules, and ultimately fails to avoid the problem of "ethics washing." The integration of behavioral law and economics can help in understanding human behavior under conditions of irrationality, but it does not provide clear guidance on the prioritization of ethical principles. Thus, while behavioral law and economics can optimize individual ethical decisions, it still lacks a consistent standard for ordering principles.
Third, the ongoing existence of cultural diversity and value conflicts. While evolutionary psychology can explain the universality of certain moral principles in human evolution, values rooted in different cultural backgrounds continue to generate conflicts. For example, the individualistic values in Western cultures and the collectivist values in Eastern cultures may manifest very differently in AI ethics. The universality of principlism may still struggle to fully meet the needs of all parties in such a multicultural context. Although evolutionary psychology provides an evolutionary framework for universality, it cannot completely eliminate value conflicts between different cultures. Particularly in the context of globalization, how to develop a set of effective ethical principles that works across diverse cultural backgrounds remains an unresolved issue.
Fourth, the rapid pace of technological iteration leads to a lag in ethical guidance. The speed of technological development often outpaces the evolution of ethics and law, causing ethical principles to fall short in addressing the challenges posed by emerging technologies. In fields such as artificial intelligence, autonomous weapons, and genetic editing, traditional moral principles may fail to encompass the complexity and risks involved, leaving many ethical gaps. Principlism primarily relies on case analysis in specific contexts for behavioral guidance, lacking a mechanism to quickly adapt to new technologies, which may result in a lag in ethical guidance.
Fifth, systemic risks and collective responsibility. Risks in complex sociotechnical systems are often systemic, involving multiple stakeholders. Principlism tends to focus on individuals and direct actions, and lacks an in-depth account of moral strategies for addressing systemic and global risks. When multiple actors participate in a system, determining how to allocate responsibility when problems arise becomes a challenge. While behavioral law and economics and sociotechnical systems theory can identify factual "responsibility" through quantitative methods such as causal analysis, their findings often remain localized, individualized, and context-bound, requiring continuous "theoretical patches" in application.
6. The Integrative Approach to Complex Adaptive Systems Theory
To address the aforementioned remaining issues, it is possible to introduce complex adaptive systems theory (CAS theory), which has generative characteristics, for theoretical integration. Complex adaptive systems theory is a theoretical framework for understanding how systems composed of many interacting individuals form overall behavior through self-organization, nonlinear feedback, and adaptation mechanisms. Its most prominent features are emergence, adaptability, and diversity. Using simulation and other technologies, complex adaptive systems theory has been widely applied in emergency management, urban planning, traffic control, public opinion risk prediction, and cybersecurity, among other fields. In the field of artificial intelligence ethics, it can provide a more comprehensive and dynamic framework, integrating the advantages of evolutionary psychology, sociotechnical systems theory, and behavioral law and economics, making principlism more adaptable to complex, variable, and cross-system ethical situations. The specific advantages can be summarized in the following six points.
First, dynamic adaptability to cope with the evolution of ethical conflicts. Complex adaptive systems theory emphasizes a system's adaptability and capacity for dynamic change. When facing ethical conflicts in complex systems, principlism can introduce self-adaptive mechanisms from complex systems to make the ethical decision-making process more flexible. For example, drawing on such mechanisms, ethical decision-making can continuously adjust the priority of different principles according to changes in the environment, avoiding the rigidity of a fixed framework. In AI ethics, facing rapidly changing social needs and technological development, a dynamically adjustable moral framework can be designed on the basis of complex systems theory, allowing decision-makers to adapt flexibly to new ethical challenges at different stages and in different environments. For example, given the impact of artificial intelligence on autonomy, the meaning of "respecting autonomy" needs to be redefined and reinterpreted.
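As a minimal illustration of such a self-adaptive mechanism, the sketch below treats the priority of each principle as a weight that is nudged upward whenever audit feedback signals that the principle has been under-protected in recent decisions. The principle names, feedback signals, and learning rate are all hypothetical assumptions; this illustrates the adjustment logic only, not a fielded method.

```python
# Hypothetical self-adaptive weighting of ethical principles from feedback.
weights = {"autonomy": 1.0, "non-maleficence": 1.0, "beneficence": 1.0, "justice": 1.0}

def update_weights(weights, feedback, lr=0.1):
    """feedback maps each principle to a violation signal in [0, 1] from audits."""
    for principle, violation in feedback.items():
        weights[principle] += lr * violation  # raise priority where harm is observed
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}  # renormalize to shares

# Invented example: audits report frequent autonomy infringements, some unfairness.
weights = update_weights(weights, {"autonomy": 0.8, "justice": 0.3})
print(max(weights, key=weights.get))  # -> "autonomy" now has the highest priority
```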
Second, a multi-level decision-making framework to address the ambiguity of principle priority. Complex adaptive systems theory emphasizes the operation of multi-level, multi-factor systems. Ethical decision-making should therefore not rely solely on weighing principles at a single level; instead, the priority of principles can be set at different system levels (such as individual, group, society, and technology). This defines the ethical priority order at each level more clearly, resolving the ambiguity of principle priority that principlism faces across situations. In addition, by emphasizing multi-party consultation and participation, discussion platforms can be established to gather opinions from all stakeholders and to negotiate the formulation and modification of ethical norms.
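A minimal sketch of what such level-specific prioritization could look like: each system level registers its own ordering of the four classic principles, and a conflict is resolved by the ordering of the level at which it arises. The orderings below are placeholders for what multi-stakeholder consultation would actually negotiate.

```python
# Hypothetical level-specific orderings of principles (highest priority first).
PRIORITY_BY_LEVEL = {
    "individual": ["autonomy", "non-maleficence", "beneficence", "justice"],
    "group":      ["non-maleficence", "justice", "autonomy", "beneficence"],
    "society":    ["justice", "non-maleficence", "beneficence", "autonomy"],
    "technology": ["non-maleficence", "autonomy", "justice", "beneficence"],
}

def resolve(level, principle_a, principle_b):
    """Return whichever principle ranks higher at the given system level."""
    order = PRIORITY_BY_LEVEL[level]
    return min((principle_a, principle_b), key=order.index)

print(resolve("individual", "autonomy", "justice"))  # -> autonomy
print(resolve("society", "autonomy", "justice"))     # -> justice
```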
Third, integrating behavioral law and economics and sociotechnical systems theory with systemic risk assessment. Complex adaptive systems theory can assist principlism in constructing a risk assessment mechanism suited to complex societies, one that handles nonlinear relationships and systemic risk, so as to better cope with ethical uncertainty. For example, by upgrading the risk perception models of behavioral law and economics and combining them with complex systems theory, an ethical decision-making framework with systemic risk prevention capabilities can be developed for high-risk areas such as finance and healthcare.
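The nonlinearity at stake can be shown with a toy threshold-contagion model: a node fails once the share of its failed dependencies crosses a tolerance, so a single local shock can propagate through the entire network. The network, labels, and threshold below are invented; real systemic-risk assessment uses far richer models.

```python
# Toy threshold-contagion model: a node fails once at least half of the
# nodes it depends on have failed. Network and threshold are invented.
graph = {
    "bank":     ["insurer", "fund"],
    "insurer":  ["bank", "fund"],
    "fund":     ["bank", "insurer", "exchange"],
    "exchange": ["fund"],
}
THRESHOLD = 0.5

def cascade(seed):
    failed = {seed}
    changed = True
    while changed:
        changed = False
        for node, deps in graph.items():
            if node not in failed and sum(d in failed for d in deps) / len(deps) >= THRESHOLD:
                failed.add(node)
                changed = True
    return failed

print(sorted(cascade("bank")))  # one local failure propagates system-wide
```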
Fourth, coping with cultural value conflicts through mutual adaptation and co-evolution. The concept of co-evolution in complex systems theory can be used to explain how moral norms from different cultural backgrounds form a dynamic balance through interaction and collaboration. Through complex adaptive systems theory, principlism can design a moral framework with co-evolutionary characteristics, allowing ethical norms from different cultural backgrounds to adapt and adjust to one another, ultimately forming a relatively stable ethical consensus. In the global governance of AI ethics, complex systems theory can help form a more flexible mechanism of ethical synergy among different cultures, legal systems, and moral values, avoiding decision-making rigidity caused by value conflicts.
Fifth, responding to the practical need for penetrating theory through multi-level theoretical integration. The complex adaptive systems framework for AI ethics has the advantage of integrating micro, meso, and macro perspectives, thereby answering the practical need for a theory that runs through the "principle-rule-application" chain. At the micro level (the individual), evolutionary psychology and cognitive neuroscience can help us understand individuals' moral decision-making processes and behavioral tendencies; at the meso level (organizations and technology), behavioral law and economics and sociotechnical systems analysis can be used to examine organizational behavior, technological development, and social impact; at the macro level (social institutions), the existing research achievements of complex adaptive systems theory and computational social science can guide the overall design and dynamic evolution of ethical systems.
Sixth, a multi-element theoretical mechanism. This mechanism includes not only agents, interaction mechanisms, and adaptation and learning mechanisms, but also compliance incentive mechanisms, non-compliance restraint mechanisms, responsibility-subject identification mechanisms, and adaptive legislation mechanisms. Agents include individuals (developers, users), organizations (enterprises, governments), and technical entities (AI systems); interaction mechanisms refer to agents interacting through social, technical, legal, and economic means, forming a complex network; adaptation and learning mechanisms refer to agents adjusting their behavior according to feedback and environmental changes, with the ethical system evolving accordingly.
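A minimal agent-based sketch of these elements, under invented payoffs: agents decide whether to comply with an ethical norm, interact in pairs, and adapt by imitating better-scoring peers, while a reward and a sanction stand in for the compliance incentive and non-compliance restraint mechanisms. Every quantitative choice here is an assumption for illustration, not an empirical claim.

```python
# Minimal agent-based model of norm compliance in a complex adaptive system.
import random

random.seed(0)
REWARD, SANCTION = 1.0, 2.5   # compliance incentive / non-compliance restraint

class Agent:
    """An agent holding a compliance strategy and a running payoff."""
    def __init__(self):
        self.complies = random.random() < 0.3  # few agents comply at first
        self.score = 0.0

    def interact(self, other):
        # Interaction mechanism: compliance pays steadily, more so when the
        # counterpart also complies; violation pays more but risks a sanction.
        if self.complies:
            self.score += REWARD + (0.5 if other.complies else 0.0)
        else:
            self.score += 2.0 - (SANCTION if random.random() < 0.5 else 0.0)

    def adapt(self, other):
        # Adaptation/learning mechanism: imitate a better-scoring peer.
        if other.score > self.score:
            self.complies = other.complies

agents = [Agent() for _ in range(100)]
for _ in range(50):                       # repeated rounds of pairwise contact
    random.shuffle(agents)
    for a, b in zip(agents[::2], agents[1::2]):
        a.interact(b); b.interact(a)
        a.adapt(b); b.adapt(a)

print(sum(a.complies for a in agents), "of 100 agents comply after 50 rounds")
```

On this toy setup, compliance tends to spread because a violator's expected payoff (2.0 minus an expected sanction of 1.25) falls below the compliance reward; this kind of emergent, population-level regularity is exactly what the CAS framework is meant to capture.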
In summary, through the integration of complex adaptive systems theory, the transformation ideas of evolutionary psychology, sociotechnical systems theory, and behavioral law and economics can be further deepened and coordinated. Complex adaptive systems theory can solve the problems left by principlism in dealing with dynamic ethical conflicts, systemic risks, and multicultural conflicts, enhancing its adaptability and flexibility in complex sociotechnical environments.
Conclusion
Where does Luhmann's systems theory of law fit in the study of artificial intelligence ethics?
At this point, a question needs to be addressed: why not use Luhmann's systems theory of law, which likewise adopts an interdisciplinary and complex-evolutionary perspective and has a mature research framework in legal theory, instead of turning to complex adaptive systems theory, which is less commonly borrowed in law and ethics? There are three main reasons. First, Luhmann's theory aims to explain the structure and function of social systems, providing a macro-sociological analytical tool, whereas principlism aims to guide individual-level moral decision-making, focusing on specific ethical dilemmas and practical applications. Since the two operate at different levels, Luhmann's theory lacks direct guidance for individual moral decision-making and is ill-suited to transforming principlism, which is centered on moral principles. Second, Luhmann's theory views law and morality as two separate systems, with the legal system operating independently of the moral system. Principlism regards moral principles as the core of decision-making and holds that morality directly guides law and practice. This divergent understanding of the relationship between morality and law makes Luhmann's theory difficult to use directly to enhance principlism's guidance of moral decision-making. Third, Luhmann's systems theory is descriptive, not normative: it explains how systems operate but does not provide moral guidance on how one should act. Principlism requires specific moral norms to guide practice, so Luhmann's theory cannot meet its needs in this regard.
The issues pointed out above do not mean that Luhmann's systems theory of law has no role in the construction of AI ethics institutions. Applying complex systems theory to AI ethics still requires borrowing from Luhmann's systems theory of law and its systemic treatment of basic rights, morality, and procedure: for example, how to incorporate systemic considerations into moral judgments, how to deal with complexity and uncertainty, and how to weigh the impact of multiple levels and factors. It also helps in understanding how moral principles function in complex social systems and how different systems communicate and coordinate with one another.
The theoretical debates and problem awareness surrounding principlism in applied ethics are, in a sense, a parallel universe to law. For example, the basic rights listed in constitutions and their implementation mechanisms closely resemble the mechanisms of principlism. Rights balancing, the principle of proportionality, analogical reasoning, and consequence-oriented approaches in legal hermeneutics are all specific technical devices for realizing basic rights or fundamental principles within the legal system. Legal norms must evolve continuously to cope with the complexity of the external environment, generating internal complexity that reduces external complexity. Basic rights in the constitutional sense and the four core principles of principlism are, in effect, detection and reflection devices formed within a normative system to adapt to changes in the external environment. The relevant achievements of legal philosophy can therefore naturally be transposed into applied ethics, especially technology ethics. Luhmann's systems theory of law goes beyond Dworkin, Alexy, and others, who view principles as values to be weighed, arguing instead that principles and rules are merely decision premises that constrain legal decision-making at different degrees of abstraction, thereby greatly broadening the perspective on the applicability of legal principles. Similar achievements in the systems theory of law can likewise provide direct and profound references for applying complex systems theory in the construction of AI ethics institutions.