His research focuses on the security, oversight, and governance of increasingly autonomous and interconnected AI agents. He is particularly interested in agent infrastructure and the risks that emerge from multi-agent interactions, including challenges in AI-AI negotiation, scalable oversight, and benchmarking agentic behavior.
Before beginning his DPhil, Chandler was a Research Engineer at the Cooperative AI Foundation, where he contributed to technical research on multi-agent systems. He received a Foresight Institute AI Safety grant to explore multi-agent security, steganography, and AI control, and consulted with IQT’s applied research teams on AI, multi-agent systems, and AI infrastructure. He was also a scholar in the ML Alignment & Theory Scholars (MATS) program, working with Jesse Clifton on multi-agent safety research. Prior to that, he worked as an engineer at Dimagi, where he contributed to global health and COVID-19 response initiatives. Visit his website for more information.