Professor Ji Weidong Invited to Participate in World Artificial Intelligence Conference Panel on Global AI Governance
At the invitation of Professor Xue Lan, Dean of the Institute for AI International Governance at Tsinghua University, Professor Ji Weidong, Dean of the China Institute for Socio-Legal Studies at Shanghai Jiao Tong University and Director of its AI Governance Research Center, attended the plenary session on AI Development and Security at WAIC 2025 on the afternoon of July 26, 2025. He participated in the panel discussion titled "AI Global Governance: Mechanism Building and Best Practices."
The plenary session commenced with a fireside chat between Turing Award laureate Andrew Chi-Chih Yao and Nobel Laureate in Physics Geoffrey Hinton. Following keynote speeches by fellow Turing Award recipients Yoshua Bengio and David Patterson, the session featured two panel discussions. The panel discussion on AI global governance was moderated by Professor Xue Lan. Speakers included Professor Ngaire Woods, Founding Dean of the Blavatnik School of Government at the University of Oxford; Professor Ji Weidong; Bernardo Mariano Jr., Assistant Secretary-General and Chief Information Officer of the United Nations (temporarily absent); Mr. Hu Guodong, Deputy Party Secretary of the China Center for Information Industry Development and Director of the Ministry of Industry and Information Technology Key Laboratory for AI Scenario Applications and Intelligent System Evaluation; Mr. Craig Mundie, former Chief Research and Strategy Officer at Microsoft; Mr. Liu Ziqian, Chief Network and Information Security Expert at China Telecom; and Ms. Liu Xiangwen, Vice President of Alibaba Cloud Computing Co. Ltd.
When asked about the most significant advancement and the most profound risk in AI's future, Professor Ji Weidong noted that brain-computer interface technology is transforming humanity itself by achieving a degree of human-machine integration. He was referring to Neuralink's implantation of chips in individuals with disabilities, enabling them to accomplish previously impossible tasks through thought alone, even recovering a degree of freedom of movement. Some project that by 2028 humanity may experience a renaissance through interconnection with AI. Yet while rapidly evolving artificial intelligence brings benefits, it also introduces risks, anxieties, and even potential harm to human society. From a legal perspective, digital sovereignty will lead to divergent AI policies and laws among nations, obstructing international cooperation in AI governance and hindering effective risk prevention, a risk with far-reaching implications in itself.
Professor Ji Weidong further emphasized that since May 2025, many nations have shifted their stance toward prioritizing AI development over safety. For instance, the U.S. House of Representatives passed H.R. 1, which would prohibit state and local governments from regulating AI models, systems, or automated decision-making tools for the next decade. Japan and South Korea have both enacted laws promoting AI-related technology R&D and application. Even the European Commission is preparing to delay the implementation of its AI Act, which prioritizes safety over development, while heads of government from multiple EU member states have jointly called for establishing a "pause the countdown" mechanism. Against this backdrop, the "Collingridge dilemma" has resurfaced: a technology's risks are difficult to foresee before it is widely deployed, yet difficult to control once it has become entrenched. This dilemma now demands consideration at the global governance level.
He further noted that within the next three to five years, AI governance and legislation will inevitably become a global issue, demanding heightened attention and concerted international cooperation. The United Nations and its specialized agencies serve as platforms for voicing and listening to diverse perspectives, and he expressed the hope that such open, pluralistic forums can be leveraged to seek the greatest common denominator in AI governance. In 2024, the United Nations adopted three AI-related resolutions: first, the resolution on safe, secure, and trustworthy artificial intelligence; second, the resolution on strengthening international cooperation on artificial intelligence capacity-building; and third, the Pact for the Future along with its annex, the Global Digital Compact. Implementing these three resolutions requires establishing AI system standards and governance frameworks capable of building international consensus. Beyond the United Nations University Global Artificial Intelligence Network, no specialized international body currently exists dedicated to AI governance. Given the rapid pace of AI development, its widespread applications, and its profound impact on human society, we may require a specialized organization akin to the International Atomic Energy Agency. Such an entity would be tasked with addressing AI-related risks and challenges, formulating concrete international rules and standards, and overseeing their implementation.
In his concluding remarks, Professor Ji Weidong drew attention to the 2024 Beijing International Consensus on AI Safety, which calls on governments and enterprises to allocate one-third of their AI R&D budgets to safety measures. He argued that this effectively proposes a technical solution for international AI governance. He therefore called upon governments and industry to shift AI governance from an ethics-centric to a technology-and-process-centric approach, to increase investment in AI safety R&D itself, and to advance toward higher ethical principles in AI governance by ensuring that technological procedural fairness underpins legal procedural fairness.
Following the panel discussion, over a dozen industry representatives, led by Mr. Yu Xiaohui, President of the China Academy of Information and Communications Technology and Secretary-General of the China AI Industry Development Alliance, and Mr. Wang Yingchun, Co-Director of the Security and Trustworthy AI Center at the Shanghai AI Laboratory, unveiled the China AI Security Commitment Framework. The announcement marked the high point of the plenary session on AI development and security.
Additionally, on the morning of the 27th, Professor Ji Weidong attended the closing international seminar of the China AI Development and Security Research Network, chaired by Ambassador Fu Ying, as well as a networking luncheon for SenseTime's international cooperation signing ceremony.