Toward Resisting AI-Enabled Authoritarianism

May 28, 2025

Fazl Barez, Isaac Friend, Keir Reid, Igor Krawczuk, Vincent Wang, Jakob Mökander, Philip Torr, Julia Morse and Robert Trager

Artificial-intelligence systems built with statistical machine learning have become the operating system of contemporary surveillance and information control, spanning both physical and online spaces. City-scale face-recognition grids, real-time social-media takedown engines and predictive “pre-crime” dashboards share four politically relevant technical features: massive data ingestion, black-box inference, automated decision-making, and the absence of a human in the loop. These features now amplify authoritarian power and erode liberal-democratic norms across many political regimes.

Yet mainstream machine-learning research still devotes only limited attention to technical safeguards such as differential privacy, federated-learning security and large-model interpretability, or to adversarial methods that can help the public resist AI-enhanced domination.
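As a minimal illustration of one such safeguard, the sketch below implements the classical Laplace mechanism for epsilon-differential privacy; the statistic, sensitivity and epsilon values are hypothetical placeholders rather than figures from the paper.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy statistic satisfying epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon), so adding or removing
    any single individual's record changes the output distribution by at most a
    factor of exp(epsilon).
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Hypothetical example: privately release the count of flagged posts in a log.
# Counting queries have sensitivity 1 (one person changes the count by at most 1).
noisy_count = laplace_mechanism(true_value=1280.0, sensitivity=1.0, epsilon=0.5)
print(noisy_count)
```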

We identify four resulting gaps: evidence (little empirical measurement of safeguard deployment), capability (open problems such as billion-parameter privacy–utility trade-offs, causal explanations for multimodal models and Byzantine-resilient federated learning), deployment (public-sector AI systems almost never ship with safeguards enabled by default) and asymmetry (authoritarian actors already enjoy a “power surplus,” so even incremental defensive advances matter).
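To make the Byzantine-resilience problem concrete, the following sketch shows coordinate-wise median aggregation, a standard robust alternative to plain averaging of client updates in federated learning; the client vectors are synthetic toy values chosen only for illustration.

```python
import numpy as np

def coordinate_wise_median(client_updates: list) -> np.ndarray:
    """Aggregate client model updates by taking the per-coordinate median.

    Unlike plain averaging, a minority of malicious (Byzantine) clients cannot
    pull any coordinate of the aggregate arbitrarily far from the honest values.
    """
    stacked = np.stack(client_updates, axis=0)  # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)

# Three honest clients plus one adversarial client sending a poisoned update.
honest = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.1, 0.9])]
poisoned = [np.array([100.0, -100.0])]
aggregate = coordinate_wise_median(honest + poisoned)  # stays near [1.0, 1.0]
print(aggregate)
```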

We propose redirecting the field toward a triad of safeguards (privacy preservation, formal interpretability and adversarial user tooling) and outline concrete research directions that fit within standard ML practice. Shifting community priorities toward Explainable-by-Design and Privacy-by-Default systems is a precondition for any durable defense of liberal democracy.
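As one illustration of what adversarial user tooling can look like in practice, the sketch below applies the well-known Fast Gradient Sign Method (FGSM) to perturb an input against a classifier; the toy model, image and epsilon value are hypothetical stand-ins rather than artifacts from the paper.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: nudge the input in the direction that most
    increases the model's loss, degrading its confidence in the true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

# Toy stand-in for a surveillance classifier (hypothetical; any nn.Module works).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([0])
perturbed = fgsm_perturb(model, image, label)
```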
