Dear Colleagues,
As 2024 draws to a close, we are excited to reflect on a busy and fruitful year at the Oxford Martin AI Governance Initiative. Our researchers and affiliates around the globe have advanced the field of AI governance through pioneering, impactful research and active policy engagement. We have brought together key stakeholders for critical discussions of the most pressing AI challenges, while fostering meaningful connections across disciplines and geographies.
Setting the agenda in technical AI governance
We have taken a leadership role in defining and resourcing the field of technical AI governance. We are advising engineering and computer science students focused on AI governance and expect to fund seven DPhil students per year starting this autumn. We are also collaborating with faculty across the technical and social sciences.
Our agenda-setting paper, Open Problems in Technical AI Governance, outlines the scope, importance, and challenges of technical AI governance, providing a taxonomy and an initial catalogue of unresolved issues. The paper has become a valuable resource for technical researchers and funders interested in contributing to AI governance.
Our researchers are examining technical verification methods, including on-chip mechanisms; researching model access requirements for evaluations; and demonstrating workload classification measures that compute providers can employ in support of governance regimes. In Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation, we proposed a novel regulatory approach to AI governance, arguing that compute providers can play an essential role in a regulatory ecosystem through four key capacities, similar to the intermediary role played by financial institutions.
We also tackled a crucial question in the paper Who Should Run Advanced AI Evaluations and Audits?, in which our researchers propose a three-step logic based on case studies of high-risk industries.
At the AIGI, we firmly believe AI governance requires technical solutions and technically informed policy-making. However, governments, industry, and philanthropic organisations appear to be under-investing in training and research in this area. To address this gap, we will expand our technical AI governance programme in the coming year to support high-impact research and nurture talent equipped with both policy and technical expertise. Stay tuned for more updates!
Convening key stakeholders for crucial discussions

Over the past year, we have actively contributed to the AI governance debate by hosting eight high-level online workshops with stakeholders from academia, governments, industries, and civil society. These discussions tackled pressing topics including:
- The role of AI Safety Institutes (AISIs);
- The future of AI Safety Summits;
- Establishing international scientific consensus on AI risks;
- Post-deployment monitoring of AI to mitigate risks.
We also welcomed stakeholders to Oxford for in-depth dialogues.
In partnership with the Oxford Internet Institute, we organised a full-day workshop to explore the standardisation of frontier AI systems. Together with researchers from Demos, we hosted a workshop with participants from South Korea to discuss advancing digital rights and ensuring equitable access to AI technologies. In collaboration with the Yale Law School, we hosted a Track II Dialogue, bringing together representatives from tech companies in the US, UK, and China to discuss foundation model regulations and their implementation.
Shaping policy through impactful research and engagement

This year, we published nine reports across our five key research areas: frontier AI regulation, technical AI governance, international AI governance, the social impact of emerging technologies, and China AI governance. These reports have provided critical insights for policymakers, with some of the authors seconded to governmental bodies to support related processes.
Dr Marta Ziosi, our postdoctoral researcher, was selected as one of the Vice-Chairs to draft the EU General-Purpose AI Code of Practice, which will establish rules for the implementation of the EU AI Act.
Claire Dennis, our research affiliate, joined the UK AI Safety Institute as Strategy Lead for the International AI Safety Report. She led a workshop at AIGI, in partnership with the Carnegie Endowment for International Peace, exploring ways to achieve international scientific consensus on AI risks. Read the report here: The Future of International Scientific Assessments of AI’s Risks.
Leonie Koessler, our research affiliate, has been appointed as a Seconded Expert to the European AI Office, where she continues to work on GPAI regulation and risk management.
Building a collaborative, multidisciplinary community of AI researchers

At the AIGI, we are building a vibrant community of aspiring researchers committed to tackling pressing AI governance challenges.
- Our inaugural Technical AI Governance Researcher Mixer brought together nearly 100 researchers from various disciplines for an evening of networking.
- We hosted over 20 three-hour coworking sessions at the Martin School over the past year, facilitating focused work and collaboration among AI researchers and DPhil students. If you’d like to join us next term, please contact Nikki at nikki.sun@oxfordmartin.ox.ac.uk.
- Beyond coworking, we organised four community meetings where our core collaborators convene to exchange ideas and share projects.
Inspiring dialogue through distinguished voices

This year, we hosted a number of public events featuring prominent speakers, such as Lisa Monaco, the 39th Deputy Attorney General of the United States, who announced the launch of the Justice AI initiative. Other notable speakers included Juraj Čorba, Chair of the OECD Working Party on AI Governance, and Jeffrey Ding, Assistant Professor at George Washington University.
Through the Oxford Martin School’s Visiting Fellow Scheme, we welcomed influential collaborators:
- Henry de Zoete, former AI adviser to the UK Prime Minister;
- Dr Allan Dafoe, Director of Frontier Safety and Governance at Google DeepMind;
- Dr Julia C. Morse, Assistant Professor at the University of California, Santa Barbara;
- Dr Naima Green-Riley, Assistant Professor at Princeton University.
A busy year for Robert…
Our Director, Professor Robert Trager, has had an exceptionally active year, delivering nearly 100 talks around the world. He represented AIGI at prominent events such as the AI Seoul Summit, where he advocated for an international reporting regime for AI developers and compute providers. Professor Trager also curated and moderated the inaugural AI Governance Day at the UN’s AI for Good Summit in Geneva. Additionally, he actively contributed to AI expert groups at the OECD, UNESCO, and the UN.

Curious about where the key AI discussions are happening? A look at Professor Trager’s 2024 travel history might give us some clues!

Looking ahead to 2025, we are excited to continue our mission of shaping the future of AI governance in collaboration with all of you. Thank you for being an integral part of this journey!
