Spring greetings from Oxford!
The AIGI team has had a productive start to the season, marked by milestones in research, policy engagement, and team growth. In March, we launched the much-anticipated Technical AI Governance (TAIG) programme, expanding our research by supporting DPhil students in engineering and computer science. In February, we played a key role in the French AI Action Summit, publishing reports and contributing to vital discussions on AI governance in an increasingly complex geopolitical landscape. We also welcomed Fazl Barez as our new Senior Postdoctoral Research Fellow. He will lead our work on AI safety and interpretability.
Technical AI Governance Programme
In collaboration with the Department of Engineering Science, we are launching a new programme aimed at cultivating a cohort of researchers with strong backgrounds in engineering and computer science, alongside a deep understanding of the societal, legal, and ethical impacts of AI. We plan to support 4–8 DPhil students each year, host two conferences annually, and establish ourselves as a hub for technical AI governance. We are currently in the final stages of accepting new DPhil students, and we will soon launch a new recruitment round for a postdoctoral researcher.
New website is live
Our brand-new website, aigi.ox.ac.uk, is now live! Stay up to date with our latest research, events, and activities. Have a look and see what we’ve been working on.
Activities
AIGI in Paris
In Paris, we launched the “Examining AI Safety as a Global Public Good: Implications, Challenges, and Research Priorities” report in partnership with the Carnegie Endowment for International Peace, Tsinghua University and Concordia AI. Our affiliates Kayla Blomquist and Elisabeth Siegel presented the report at a side event where key stakeholders explored ways to strengthen collective action to ensure the safety of increasingly advanced AI systems while accounting for geopolitical and economic challenges.
Our Director, Robert Trager, took part in several panel discussions alongside high-profile AI experts, including Jade Leung, Chief Technical Officer at the UK AI Safety Institute; Karine Perset, Acting Head of the OECD Division on AI and Emerging Digital Technologies; Prof. Xue Lan, Dean of the Institute for AI International Governance at Tsinghua University; and Lee Wan Sie, Director of Development for Data-Driven Technology at the Info-communications Media Development Authority (IMDA) of Singapore.
At AIGI, we spent time reflecting on the future of the summit series. In our recent report, The Future of the AI Summit Series, our researchers put forward bold reforms to make future summits more focused and agile. Our central recommendation is to keep the series focused on advanced AI governance, as no other forum specifically tackles frontier AI systems. We propose a two-track model, with one track dedicated to high-stakes AI safety and governance and a second for broader public-interest issues. Read our director Robert Trager’s X thread for highlights.
Henry de Zoete, our senior advisor and a co-author of the report, shared his reflections following the Paris Summit. Until July 2024 he was the Prime Minister’s advisor on AI, leading the UK’s approach to AI, including the Bletchley Park AI Safety Summit and the establishment of the UK’s AI Safety Institute.
“We can’t wait 12 months. The cycle of new model releases has sped up to quarterly rather than yearly. The world could look very different in a year. We need summits every 6 months even if they are smaller or have a virtual component like in Seoul. They are vital for the international community to keep up with the pace of A.I. The exponential isn’t slowing down any time soon!”
AIGI at the UN Internet Governance Forum
In December, AIGI Senior Advisor Sam Daws spoke on mechanisms for AI interoperability at the UN Internet Governance Forum in Riyadh, Saudi Arabia. He outlined geopolitical and technical obstacles to interoperability in AI safety and security collaboration, and in measuring, tracking, and incentivising the energy footprint of AI. He advocated for cultural diversity in AI design and for trans-regional ‘cultural interoperability’ as ‘Sovereign AI’ grows. Finally, he identified key opportunities for international alignment and confidence-building through the four post-GDC (Global Digital Compact) UN pathways on science, policy, standards, and capacity-building, and through diverse minilateral forums such as APEC, BRICS+, CICA, and GPAI.
Contributing to the US AI Action Plan
We recently submitted a comment in response to a request for information on the development of a U.S. AI Action Plan, which is set to become a cornerstone of the new administration’s emerging AI strategy. Our submission proposes a targeted framework to strengthen U.S. AI leadership while addressing security risks—focusing on strategic investments in infrastructure and compute, capability-based governance approaches, and effective management of access to strategic AI capabilities.
Multi-stakeholder Convenings
We held three convenings of experts from government, industry, civil society, and academia to address pressing issues in AI governance.
The first convening focused on defining a gold standard for the use of risk tiers in managing advanced AI systems. It drew on, and fed into, processes such as the EU AI Act General Purpose AI Code of Practice and the safety frameworks of leading AI companies. Developing a gold standard is a long and rigorous process, but as researchers and evaluators begin to predict that risks from the misuse of advanced AI may materialise this year or next, it is critical to establish best practices before serious harms emerge.
The second convening, held in collaboration with the Berkman Klein Center for Internet & Society at Harvard University, explored the future of open-source frontier AI governance. Open-source AI offers many benefits, but its wide diffusion also presents particular governance challenges and risks. The convening aimed to build consensus across the open-source spectrum on how to design a future in which such risks are mitigated and the benefits can be realised. Despite lively debate and diverse perspectives, participants found common ground.
The third convening aimed to map key components of frontier safety frameworks to established risk management standards—identifying gaps and seeking alignment in both terminology and underlying processes. Through this effort, we aim to support a more cohesive and effective approach to AI risk management, integrating insights from both the AI safety and standards communities. A report summarising the key discussions will be produced.
Welcome to the team Fazl!
We are excited to welcome Fazl Barez, who joins AIGI as a Senior Postdoctoral Research Fellow. He will be leading our research initiatives on AI safety and interpretability. Fazl is a well-established AI safety researcher, collaborating with leading academic institutions, AGI labs, and government organisations. He holds research affiliations with the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, the Digital Trust Centre at Nanyang Technological University, and the School of Informatics at the University of Edinburgh. As a member of ELLIS, he contributes to advancing AI safety research across Europe.
AIGI also welcomes new visiting fellows
Joslyn is a Senior Research Scientist at Google DeepMind working on long-term strategy and AI governance. Before joining Google DeepMind, she was Associate Professor of Government at Wesleyan University and a faculty member at the University of California, Santa Barbara, focusing on international relations.
Nitarshan is a PhD candidate researching AI at the University of Cambridge, and a Vice-Chair leading the drafting of the EU’s General-Purpose AI Code of Practice. He was previously Senior Policy Adviser to the UK Secretary of State for Science, Innovation and Technology, a role in which he co-founded the AI Security Institute and co-created the AI Safety Summit and the UK’s supercomputing programme.
… and new research affiliates
Kwan Yee Ng, Senior Program Manager at Concordia AI, a Beijing-based social enterprise focused on AI safety and governance
Amro Awad, Associate Professor in Electrical Engineering at the University of Oxford
James Oldfield, PhD student in machine learning at Queen Mary University of London
Shannon Hong, Research Fellow at the Oxford China Policy Lab and MBA candidate at Saïd Business School, University of Oxford
George Yin, Senior Research Fellow and Deputy Director of the World and China Program at the Center for China Studies, National Taiwan University (NTUCCS)
Renan Araujo, Research Manager at the Institute for AI Policy and Strategy
Research Recap
Looking ahead: Synergies between the EU AI Office and UK AISI
How can the UK and EU enhance AI security while respecting their distinct mandates? This policy brief explores strategic alignment between the UK AI Security Institute (AISI) and the European AI Office, using a four-tier framework – Collaboration, Coordination, Communication, and Separation – to maximise impact while maintaining autonomy.
The report – produced in collaboration with Demos and supported by the Government of the Republic of Korea – examines the evolving digital rights landscape in the UK, the EU, and beyond. Through a literature review, expert interviews, a roundtable, and a policy workshop with digital rights organisations, academics, and policymakers, it assesses the current challenges and new approaches to advancing digital rights governance.
Promising Topics for US–China Dialogues on AI Safety and Governance
In this report, researchers develop recommendations for AI governance and safety dialogue topics from one specific angle: identifying topics on which there is significant common ground between the US and China, based on a comprehensive analysis of over 40 key primary AI policy and corporate governance documents from the two countries. They analyse these areas of common ground in the context of US–China relations, setting aside topics that would be deemed too sensitive or that are linked to existing tensions between the two countries.
Who Should Develop Which AI Evaluations?
Researchers explore frameworks and criteria for determining which actors (e.g., government agencies, AI companies, third-party organisations) are best suited to develop AI model evaluations. Key challenges include conflicts of interest when AI companies assess their own models, the information and skill requirements for AI evaluations, and the (sometimes) blurred boundary between developing and conducting evaluations.
Voluntary Industry Initiatives in Frontier AI Governance: Lessons from Aviation and Nuclear Power
As voluntary, industry-led safety initiatives take on an increasingly central role in strategies to reduce risks from frontier AI systems, they deserve greater scrutiny. This paper addresses this need for scrutiny by examining two industries—aviation and nuclear power—where human safety is paramount and industry consortia have been key to improving safety outcomes.
Looking ahead…
We are curating a series of sessions with the International Telecommunication Union (ITU) for this year’s AI for Good Summit, taking place from 8–11 July 2025 in Geneva, Switzerland. Register here.
We are also preparing to launch the first AI knowledge-sharing platform in May through the Secure and Trustworthy AI Knowledge-Sharing Initiative, with the support of the ITU. This joint initiative aims to advance international coordination on AI security, trustworthiness, innovation, and inclusivity. It also addresses the challenge of mapping and supporting different countries’ institutional needs and solutions in AI governance.
Subscribe to our AIGI Newsletter!