Frontier AI systems – general-purpose systems capable of performing a wide range of tasks – pose safety risks that risk management can help address. However, most AI-specific risk management standards were developed for narrow AI systems, before the advent of frontier AI. Frontier AI both amplifies existing risks and introduces qualitatively novel challenges. Not only does the rapid pace of technological change leave a notable lack of stable scientific consensus, but emerging frontier AI safety practices are often misaligned with, or may undermine, established risk management frameworks. To address these challenges, we systematically surface open problems in frontier AI risk management. Adopting a problem-oriented approach, we examine each stage of the risk management process – risk planning, identification, analysis, evaluation, and mitigation – through a structured review of the literature, identifying unresolved challenges at each stage. Recognising that different types of open problems call for different responses, we classify them according to whether they reflect (a) a lack of scientific or technical consensus, (b) misalignment with, or challenges to, established risk management frameworks, or (c) shortcomings in implementation despite apparent consensus and alignment. By mapping these open problems and identifying the actors best positioned to address them – including developers, deployers, regulators, standards bodies, researchers, and third-party evaluators – this work aims to clarify where progress is needed to enable robust and meaningful consensus on frontier AI risk management. The paper does not propose specific solutions; instead, it provides a problem-oriented, agenda-setting reference document, complemented by a living online repository, intended to support coordination, reduce duplication, and guide future research and governance efforts.