Prof Ji Weidong attended the 2024 Global Developer Pioneers Conference and participated in the "Frontier Forum on Large Models"
2024-04-16 | Source: CISLS



The 2024 Global Developer Pioneers Conference (2024 GDC) was held in Shanghai from 23 to 24 March. The Frontier Forum on Large Models, a major event of the conference, was held on the morning of 24 March at the West Bund in Xuhui District, Shanghai. Ji Weidong, Senior Professor of Liberal Arts and Member of the University Council of Shanghai Jiao Tong University, President of the China Institute for Socio-Legal Studies, President of the Computational Law Branch of the China Computer Federation (CCF), and Director of the Center for AI Governance and Law, attended the forum and participated in the dialogue session as a guest.


As a guest of the second dialogue session of the forum, Professor Ji Weidong discussed the topic of "Big Model Governance: Technology, Ecology, and the Future" with Zhang Qi, Professor at the School of Computer Science, Fudan University; Zhang Rong, Chief Security Officer of Tianyi Lab, Alibaba Cloud; and Ma Xingjun, Researcher and Doctoral Supervisor at Fudan University, under the chairmanship of Wang Yingchun, Associate Researcher at the Governance Research Center of the Shanghai Artificial Intelligence Laboratory (SARCGL).



In the guest dialogue, moderator Wang Yingchun first raised the question of safety in the era of large models. In response, Professor Ji Weidong said that how to deal with the tension between security and development in the era of large models is a pressing issue for large-model governance and AI legislation. Although people seem to have gradually accepted the widespread use of large models and AI since the UK judiciary permitted judges to use AI in writing rulings on 12 December 2023, their safety and security have remained hotly debated across all sectors of society. On 13 March 2024, the European Parliament voted to pass the Artificial Intelligence Act, which sets out comprehensive rules for developers of AI systems in an attempt to keep AI safe. In response, however, more than 150 European tech companies jointly issued an open letter of protest, arguing that the Act would slow the development of AI in Europe. According to relevant research, implementing the EU's Artificial Intelligence Act would consume 17% of investment in artificial intelligence, seriously hindering the EU's digitalisation process.

In addition, although Chinese renders the safety of AI with a single term, English draws a strict distinction between "safety" and "security", and the two words often appear side by side in the AI ethics principles issued by various countries. "Safety" tends to emphasise the stable state of a system, organisation or individual functioning normally according to its own mechanism; its value goal is the stable operation of the system itself. What "safety" stresses can therefore be regarded as one of the foundations of a large model's performance, since maintaining the stability of the system itself is also an important indicator in evaluating a large model. "Security", on the other hand, emphasises protection from intentional or malicious external harm and the taking of necessary measures to prevent such harm from occurring, which is why the term "public security" is used. "Security" thus requires knowledge of potential external threats and the adoption of appropriate countermeasures and precautions against them.



After the other panellists had expressed their views on the safety of AI, moderator Wang Yingchun asked how the differences in AI governance between countries and regions might be reconciled and international cooperation on AI promoted. Prof Ji Weidong responded that focusing on communication and procedure, and emphasising legal pluralism, is an important path towards pluralistic governance of AI. Taking AI ethics and legislation as an example: if AI ethics is treated as a universal substantive moral code, and the goal of AI legislation is a single set of globally applicable substantive laws, then international cooperation on AI ethics and AI legislation becomes impossible, and the plurality of AI governance cannot be reflected. It is therefore necessary to emphasise the process of communication among multiple subjects in response to the plural values of AI governance; to distinguish substantive AI ethics, which differs by geography, situation and country, from procedural AI ethics, which is negotiated communicatively in response to those differences; and to understand AI ethics as a procedural rule for handling the meaning of moral norms used to evaluate the reasonableness of decision-making in the face of uncertainty. Moreover, communication and procedure are equally instructive entry points to the question of AI explainability. For example, a study entitled "Understanding the dilemma of explainable artificial intelligence: a proposal for a ritual dialog framework" was recently published in Humanities and Social Sciences Communications, a Nature Portfolio journal. From a human-computer interaction perspective, the study examines how users' trust in AI is shaped by the dialogue between the user and the creator of the AI model, and argues that a Ritual Dialog Framework (RDF) is needed to enhance user trust in explainable AI (XAI). The importance of such a framework is that understanding should be teleological, whatever the nature of the explanation. The Ritual Dialog Framework builds on the social context in which AI models are developed, recognising that the trustworthiness of AI systems often rests on society's collective assessment of technology, authority and shared experience. The framework is therefore not only a communication strategy but also marks a distinctive "ritual": it acts as a ritual norm mediating between understanding and trust, helping to establish trust between users and explainable AI and its makers.

