AI Threat Domains
The AI Threat Domains framework is a harm-based taxonomy that classifies AI-enabled risks into 8 domains and 48 empirically grounded threat patterns to support regulatory, policy, and operational risk assessment.
All Domains
Agentic & Autonomous Threats
Threats caused by AI systems that act independently, persist over time, or coordinate with other systems.
7 threat patterns
Human–AI Control Threats
Threats arising from how humans rely on, defer to, or lose control over AI systems.
5 threat patterns
Economic & Labor Threats
Threats that distort markets, labor conditions, or the distribution of economic power.
5 threat patterns
Information Integrity Threats
Threats that undermine the reliability, authenticity, or shared understanding of information.
6 threat patterns
Privacy & Surveillance Threats
Threats involving unauthorized inference, tracking, or monitoring of individuals or groups.
5 threat patterns
Security & Cyber Threats
AI-enabled attacks that compromise the integrity, confidentiality, or availability of digital systems — through input manipulation, model exploitation, or automated offense.
9 threat patterns
Discrimination & Social Harm
Threats that result in unfair treatment, exclusion, or social harm to individuals or groups.
5 threat patterns
Systemic & Catastrophic Risks
Threats that emerge from scale, coupling, and accumulation rather than single failures.
6 threat patterns
Framework Overview
How this taxonomy is structured
This taxonomy organises AI-enabled threats by their observable impact, rather than by underlying technical mechanisms.
- Threats are grouped into 8 domains, each representing a distinct category of harm
- Each domain contains 5–9 threat patterns describing concrete harm mechanisms
- Domains are analytically distinct but not strictly exclusive
- Cross-domain overlaps are explicitly referenced while maintaining a primary classification
How Domains Interrelate
AI-enabled threats commonly span multiple domains rather than occurring in isolation. Each incident is assigned a primary domain based on its dominant harm, with secondary domain intersections explicitly noted.
Example: A deepfake used for financial fraud is primarily classified under Information Integrity, while intersecting with Security & Cyber and Privacy & Surveillance.
This approach preserves analytical clarity while accurately reflecting how AI-related harms manifest across real-world contexts.
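The primary/secondary classification described above can be sketched as a minimal data model. This is an illustrative sketch only: the class names, domain keys, and validation rules are assumptions for demonstration, not part of the published framework.

```python
from dataclasses import dataclass, field

# Hypothetical domain keys mapped to the 8 domains named in the taxonomy.
DOMAINS = {
    "agentic": "Agentic & Autonomous Threats",
    "control": "Human–AI Control Threats",
    "economic": "Economic & Labor Threats",
    "information": "Information Integrity Threats",
    "privacy": "Privacy & Surveillance Threats",
    "security": "Security & Cyber Threats",
    "discrimination": "Discrimination & Social Harm",
    "systemic": "Systemic & Catastrophic Risks",
}

@dataclass
class Incident:
    """One incident: exactly one primary domain, any number of secondaries."""
    description: str
    primary: str                                          # dominant harm
    secondary: list[str] = field(default_factory=list)    # noted intersections

    def __post_init__(self):
        # Every referenced domain must exist in the taxonomy.
        for key in [self.primary, *self.secondary]:
            if key not in DOMAINS:
                raise ValueError(f"unknown domain: {key}")
        # The primary classification is exclusive of the secondary list.
        if self.primary in self.secondary:
            raise ValueError("primary domain repeated as secondary")

# The worked example from the text: a deepfake used for financial fraud.
deepfake_fraud = Incident(
    description="Deepfake used for financial fraud",
    primary="information",
    secondary=["security", "privacy"],
)
```

Keeping the primary domain as a single required field, with intersections in a separate list, mirrors the framework's rule that each incident retains one dominant classification while cross-domain overlaps remain explicit.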
Methodology
Threat patterns are identified through systematic review of academic literature, regulatory filings, incident databases, and primary source reporting. Each pattern undergoes evidence-level assessment before inclusion.
Last reviewed: 2026-03-20 · 8 domains · 48 patterns