The AI Safety Institutes (AISIs) – and other institutions that may play similar roles, such as the Chinese AI Safety Network and Unit A3 of the EU AI Office – are increasingly important actors in the governance of advanced AI. The Oxford Martin AI Governance Initiative (AIGI) recently convened a small expert workshop – held under the Chatham House Rule – to explore the roles that AISIs could play both domestically and internationally. Attendees suggested a wide variety of functions that AISIs could fulfil, including acting as evaluators of models, developers of standards, and coordinators of third parties, and noted that a network of AISIs might coordinate these efforts internationally. This memo summarises the key takeaways, questions, and tradeoffs that emerged from the workshop regarding AISIs’ structure and functions. These include: conducting evaluations, participating in standards development, coordinating with regulatory bodies, handling classified material and information-sharing, coordinating internationally, potentially forming regional AISIs, and contributing to the international scientific report on AI safety.