Proposal for UN Independent Scientific Panel on AI: Balancing Rigor and Legitimacy

May 2, 2025

Julia C. Morse, Robert Trager & Ranjit Lall

With artificial intelligence (AI) systems evolving rapidly, it has become increasingly important to ensure that countries share a common understanding of the capabilities, risks, opportunities, and challenges of these systems. In recognition of this policy challenge, countries agreed in the UN Global Digital Compact of September 2024 to create an independent international scientific panel on AI. Negotiations over the institutional form and function of this new panel are currently ongoing. Drawing on lessons from nuclear proliferation, civil aviation, climate change, and other areas of global governance, and building on the co-facilitators’ 19 March 2025 zero draft, this analysis suggests a possible design for such a panel.

The paper expands on the zero draft’s proposed governance framework and suggests fleshing it out as follows:

  • Design: We recommend the panel include an AI Advisory Council (40 member states), an AI Expert Committee (15-20 technical experts, with some assigned to specific topics and areas), and four working groups (15-20 technical experts each) covering AI capabilities and risks, AI impacts, AI accessibility, and AI and the sustainable development goals (SDGs).
  • Staffing: We recommend the AI Expert Committee be staffed with scientists, scholars, and technical experts with no direct ties to either industry or government. Experts would be nominated through an open process and the UN Secretary-General would put together a slate, to be approved by the Advisory Council.
  • Multistakeholder Engagement: We recommend that the AI Expert Committee maintain an industry advisory council and a civil society advisory council. We further recommend allowing the AI working groups to include subject-matter experts with ties to industry (no more than three per working group).
  • Timing of reports: We suggest each working group produce semi-annual reports as well as special ad hoc reports on timely topics of interest. We suggest that both semi-annual and ad hoc reports be released publicly as preliminary drafts upon working group approval and as final reports after consultation with the AI Expert Committee. We suggest that the AI Expert Committee draft the annual report, basing the summary document on the working groups’ semi-annual reports and updates.
  • Annual Report Approval: We recommend the AI Advisory Council have the opportunity to review the report and append footnotes recording dissenting opinions. To encourage political buy-in, we suggest the Advisory Council also have the ability to revise report language (subject to two-thirds majority approval of each amendment). The Advisory Council would also approve the final report, aiming for consensus but falling back on a two-thirds majority vote if necessary.
  • Relationship with Global Dialogue: We recommend that the panel time the release of its annual comprehensive report to coincide with the Global Dialogue.