Increasing risks from advanced AI demand effective risk management systems tailored to this rapidly changing technology. One key part of risk management is establishing risk tiers: categories based on expected harm that specify in advance which mitigations and responses will apply to systems at different risk levels. Risk tiers force AI companies to identify the potential risks their systems pose and to plan appropriate responses. They also provide public transparency about the level of risk society is accepting from AI and how those risks are being managed.
Risk management, including risk tiering, has received attention from both policymakers and industry, but different organizations have taken divergent and sometimes incompatible approaches. This diversity has facilitated innovation and experimentation in adapting risk management to the challenges of advanced AI. However, it has also made it difficult to understand the overall risk picture, to see how each system and developer contributes to it, and to compare the effectiveness of different risk estimation and mitigation practices. A more standardized approach to risk tiering is therefore needed, one that enables aggregation, comparison, and consistent scientific grounding while preserving space for innovation.
To explore such standardization, the Oxford Martin AI Governance Initiative (AIGI) convened experts from government, industry, academia, and civil society to lay the foundation for a gold standard for advanced AI risk tiers. A complete gold standard will require further work, but the convening yielded insights into how risk tiers might be adapted to advanced AI and established a framework for broader standardization efforts.