Can we standardise the frontier of AI?

June 9, 2025

Huw Roberts and Marta Ziosi

International standards have been promoted as a mechanism for governing advanced AI: they can explicate national-level regulations, support interoperability between jurisdictions, and guide best practices. However, there is significant ambiguity about what should be standardised for advanced AI systems, when, and by whom.

In this paper, we explore the possibilities and limitations of international standards as a governance tool for advanced AI. Given the breadth of issues standardising advanced AI covers, we focus on the case study of standards designed to address fairness and bias in large language models (LLMs) and use it to draw broader theoretical insights. To explore this case, we conducted 50 interviews with AI standards experts to understand how the institutional competencies of standards development organisations (SDOs) and the characteristics of advanced AI influence the efficacy of international standards as a governance tool.

Interviewees highlighted that international SDOs have a strong reputation, robust processes, broad international representation, and a track record of producing impactful standards. However, they face challenges regarding adoption, participation, the governance of value-laden issues, and the speed and complexity of technological development. Based on this, we argue that traditional SDOs should not "standardise the frontier" per se, and should instead focus on governing well-established "trailing-edge" issues while developing regulatory intermediary partners to address "leading-edge" technical issues.
