Verification for International AI Governance

July 3, 2025

Ben Harack, Robert F. Trager, Anka Reuel, David Manheim, Miles Brundage, Onni Aarne, Aaron Scher, Yanliang Pan, Jenny Xiao, Kristy Loke, Sumaya Nur Adan, Guillem Bas, Nicholas A. Caputo, Julia C. Morse, Janvi Ahuja, Isabella Duan, Janet Egan, Ben Bucknall, Brianna Rosen, Renan Araujo, Vincent Boulanin, Ranjit Lall, Fazl Barez, Sanaa Alvira, Corin Katzke, Ahmad Atamli, and Amro Awad

The growing impacts of artificial intelligence (AI) are spurring states to consider international agreements that could help manage this rapidly evolving technology. The political feasibility of such agreements can hinge on their verifiability—the extent to which the states involved can determine whether other states are complying. This report analyzes several potential international agreements and ways they could be verified. To improve the robustness of the conclusions, pessimistic assumptions are made about the technical and political parameters of the verification challenge.

This report has three primary findings. First, verification of many international AI agreements appears possible even without speculative advances in verification technology. Some agreements can be verified using existing hardware, while others will require major investments in developing and installing verification infrastructure. In particular, verifying the regulation of data center-based AI development and deployment appears to be possible within a few years if serious efforts are made toward that goal. One such scheme would require (1) constructing and installing narrow-purpose verification hardware in data centers and (2) creating a mutually verified data center that can run privacy-preserving computations. Second, verification for some kinds of AI-related activities is likely to face a combination of technical and political barriers, thus limiting prospects for agreement. In particular, the detailed regulation of mobile AI-enabled devices in sensitive domains—such as weapons—faces severe political challenges. Third, near-term actions in several areas, including research and development as well as state policy, can improve the prospects for future verification agreements by reducing costs and security concerns. In sum, this report outlines workable approaches for verifying international AI agreements and illustrates how investments in verification today can shape the political possibilities of tomorrow.
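The abstract does not specify a concrete protocol, but the general flavor of privacy-preserving verification can be illustrated with a minimal cryptographic-commitment sketch. In this hypothetical example (not drawn from the report), a party commits to a sensitive record, such as a training-run log, by publishing only a hash; an inspector can later confirm that a disclosed record matches the earlier commitment without the record ever being public in the interim:

```python
import hashlib
import secrets

def commit(data: bytes) -> tuple[bytes, bytes]:
    """Commit to data without revealing it: publish the digest,
    keep (data, nonce) private until a later disclosure step."""
    nonce = secrets.token_bytes(32)  # randomness prevents guessing attacks
    digest = hashlib.sha256(nonce + data).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, data: bytes) -> bool:
    """Check that the revealed (data, nonce) pair matches the commitment."""
    return hashlib.sha256(nonce + data).digest() == digest

# Hypothetical usage: a lab commits to a training-run record at the time
# of the run; an inspector verifies the disclosure during a later audit.
record = b"training run: 1.2e25 FLOP, cluster A"
digest, nonce = commit(record)
assert verify(digest, nonce, record)                 # honest disclosure passes
assert not verify(digest, nonce, b"altered record")  # tampered record fails
```

Real verification schemes of the kind the report discusses would involve far richer machinery (attested hardware, multi-party computation), but the core idea is the same: compliance can be checked while the underlying data stays private.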
