Mapping Industry Practices to the EU AI Act’s GPAI Code of Practice Safety and Security Measures

July 11, 2025

Lily Stelling, Mick Yang, Rokas Gipiškis, Leon Staufer, Ze Shen Chin, Siméon Campos, Ariel Gil, Michael Chen

This report provides a detailed comparison between the Safety and Security measures proposed in the EU AI Act’s General-Purpose AI (GPAI) Code of Practice (Third Draft) and the commitments and practices that leading AI companies have voluntarily adopted. As the EU moves toward enforcing binding obligations for GPAI model providers, the Code of Practice will be key to bridging legal requirements with concrete technical commitments. Our analysis focuses on the draft’s Safety and Security section (Commitments II.1–II.16), documenting excerpts from current public-facing documents that are relevant to each individual measure.

We systematically reviewed several document types, such as companies’ frontier safety frameworks and model cards, from over a dozen companies, including OpenAI, Anthropic, Google DeepMind, Microsoft, Meta, and Amazon. This report is not an assessment of legal compliance, nor does it take any prescriptive viewpoint on the Code of Practice or companies’ policies. Instead, it aims to inform the ongoing dialogue between regulators and General-Purpose AI model providers by surfacing evidence of industry precedent for various measures. Notably, for the majority of the measures in Commitments II.1–II.16, we found relevant quotes in the documents of at least five companies.
