Open Problems in Frontier AI Risk Management

February 24, 2026

Authors: Marta Ziosi, Miro Plueckebaum, Stephen Casper, Henry Papadatos, Ze Shen Chin, Peter Slattery, James Gealy, Tim G. J. Rudner, Brian Tse, Ariel Gil, Patricia Paskov, Maximilian Negele, Rokas Gipiškis, Nada Madkour, Vera Lummis, Rupal Jain, Luise Eder, Kristina Fort, Malou C. van Draanen Glismann, Inès Belhadj, Amin Oueslati, Anna K. Wisakanto, Richard Mallah, Koen Holtman, Ranj Zuhdi, Daniel S. Schiff, Jessica Newman, Malcolm Murray, Robert Trager

Frontier AI systems – general-purpose systems capable of performing a wide range of tasks – pose safety risks that risk management can help address. However, most AI-specific risk management standards were developed for narrow AI systems, before the advent of frontier AI. Frontier AI both amplifies existing risks and introduces qualitatively novel challenges. Not only is there a notable lack of stable scientific consensus resulting from the rapid pace of technological change, but emerging frontier AI safety practices are often misaligned with, or may undermine, established risk management frameworks. To address these challenges, we systematically surface open problems in frontier AI risk management. Adopting a problem-oriented approach, we examine each stage of the risk management process – risk planning, identification, analysis, evaluation, and mitigation – through a structured review of the literature, identifying unresolved challenges and the actors best positioned to address them. Recognising that different types of open problems call for different responses, we classify open problems according to whether they reflect (a) a lack of scientific or technical consensus, (b) misalignment with, or challenges to, established risk management frameworks, or (c) shortcomings in implementation despite apparent consensus and alignment. By mapping these open problems and identifying the actors best positioned to address them – including developers, deployers, regulators, standards bodies, researchers, and third-party evaluators – this work aims to clarify where progress is needed to enable robust and meaningful consensus on frontier AI risk management. The paper does not propose specific solutions; instead, it provides a problem-oriented, agenda-setting reference document, complemented by a living online repository, intended to support coordination, reduce duplication, and guide future research and governance efforts.
