Frontier AI poses significant safety risks (Bengio et al., 2026). It broadens access to tools for generating deceptive or harmful content (Achanta, 2025), exacerbates national security threats by enabling sophisticated offensive cyber capabilities (Moix et al., 2025; Potter et al., 2025), and heightens inequalities through biased outputs (Gallegos et al., 2023), to name a few. Traditionally, risk management offers a useful framework to identify, analyse and mitigate safety risks. Risk management processes operate at multiple levels: through high-level principles and processes for managing risks to organisations (e.g., ISO 31000:2018); through sector-specific standards for managing risks associated with particular classes of products (e.g., ISO 14971:2019 for medical devices); through guidance on selecting among relevant risk assessment techniques at different stages of the risk management process (e.g., IEC 31010:2019); and through overarching frameworks for integrating safety considerations across the risk management process (e.g., ISO/IEC Guide 51:2014).
In the context of AI, existing risk management standards primarily address narrow AI systems (e.g., ISO/IEC 23894:2023; ISO/IEC 42001:2023). These instruments were largely developed prior to the emergence of ‘frontier’ or ‘general-purpose’ AI: ‘AI systems that learn patterns from large amounts of data, enabling them to perform a variety of tasks’ (Bengio et al., 2026, p. 17). This development both amplifies existing risks and introduces qualitatively novel challenges. Not only is there a notable lack of stable scientific consensus, owing to the rapid pace of technological change (Roberts & Ziosi, 2025); the safety practices emerging for frontier AI are also not fully aligned with, and may even undermine, established risk management processes (Schuett et al., 2023; Koessler & Schuett, 2023). Concurrently, improvements to and proposals for frontier AI risk management are being pursued along several distinct fronts. These include easily updatable, specific technical guidance (e.g., FMF, n.d.; UK AISI, n.d.), mappings of the existing consensus on AI safety risks and practices (e.g., Bengio et al., 2026), independent proposals (e.g., Barrett et al., 2025; Campos et al., 2025; Shanghai AI Lab & Concordia AI, 2025) and regional efforts (EU Commission, 2025; NIST, 2024; CAC, 2025). Without a cohesive effort to systematically surface the challenges that frontier AI poses to risk management, however, these initiatives risk relying on flawed assumptions about the state of the field: they may fail to deliver targeted and meaningful progress, generate duplicative work, and create confusion or divergence over what should be applied in which contexts.
To address this, we propose to systematically surface open problems in the field of frontier AI risk management. We take a problem-oriented approach, advancing the field by shedding light on what needs addressing; such an approach is historically common in other disciplines (e.g., Hilbert, 1900) and has recently been used to tackle other emerging challenges in frontier AI (Reuel et al., 2024; Casper et al., 2025; Barez et al., 2025; Sharkey et al., 2025). Our goal is twofold: 1) to highlight which challenges must be addressed so that meaningful and robust consensus on AI risk management can be pursued, and 2) to pave the way for future solutions by formulating research questions and pinpointing which actors ought to pursue them. We do so by systematically examining each stage of the risk management process, conducting a review of the relevant literature for each stage (Grant & Booth, 2009) and identifying the ‘open problems’ and the actors relevant to addressing them. Given that different kinds of open problems may require different approaches, we classify each identified problem according to whether it reflects (a) a lack of scientific (or technical) consensus, (b) misalignment with or challenges to established risk management frameworks, or (c) shortcomings in implementation or application despite consensus and alignment.
By ‘open problems,’ we refer to unresolved issues concerning the processes and techniques that organisations should implement to manage AI-related risks effectively. Accordingly, the paper does not focus a priori on a predefined set of risks from AI, but rather on the organisational and procedural mechanisms through which risks are identified, assessed, and mitigated. While the analysis primarily concerns strategies available to organisations developing, deploying and integrating AI systems, it also considers the roles of other relevant actors, such as regulators, academic researchers, standards developers, and third-party auditors, insofar as they shape or support effective risk management processes. Additionally, the classification into different types of open problems can help inform which kinds of efforts are needed to address them. However, we refrain from proposing specific solutions, as these may be best formulated by situated actors. The concrete outcome of this work is a problem-oriented, agenda-setting reference document, complemented by a living repository hosted online, intended to help relevant stakeholders identify gaps, coordinate action, and collectively advance better practices.
We recognise a few caveats and limitations. While our approach is systematic, the list of problems does not aim to be exhaustive, but at best illustrative of a range of relevant problems. Many of the open problems discussed arise precisely because there has already been substantial progress in these areas, which makes the underlying challenges visible. Consequently, areas where we have identified relatively few open problems should not be understood as better developed or less important, but rather as areas that remain too poorly understood and explored for the relevant challenges to be clearly identified and articulated. Throughout, we use the term ‘problems’ as a useful heuristic: it should not be taken to denote issues that are inherently negative or fully solvable, but also encompasses persistent challenges that must be continually managed, productive disagreements, and differing approaches with their own advantages and disadvantages. The aim of this work is therefore to surface and clarify such issues, rather than to claim their definitive resolution.
To encourage alignment between traditional risk management and frontier AI risk management practices and frameworks, the paper surveys the open problems following, as much as possible, the high-level structure of existing risk management standards (ISO 31000:2018; ISO/IEC 23894:2023), informed by safety-relevant standards (ISO/IEC Guide 51:2014). The document is organised in the following sections: 1. Risk Planning, 2. Risk Identification, 3. Risk Analysis, 4. Risk Evaluation, and 5. Risk Mitigation. To keep the scope manageable, we leave out transversal aspects such as Communication and Consultation, Monitoring and Review, and Recording and Reporting, also presented as ‘risk governance’ in other recent framework proposals. However, we may include them in future iterations.


