
Threat Taxonomy

TopAIThreats classifies AI-enabled threats using a multi-dimensional taxonomy. The core hierarchy — domains, patterns, and incidents — is complemented by four stakeholder dimensions (affected groups, exposure pathways, ecosystem positions, and impact level) and six analytical dimensions (causal factors, harm types, assets, attack lifecycle, governance frameworks, and risk levels).

Core hierarchy: Domain → Pattern → Incident

Domains: 8 | Patterns: 48 | Exposure Pathways: 5 | Causal Factors: 15 | Assets: 12 | Harm Types: 7 | Frameworks: 3

Machine-readable: /api/threats.json
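A client consuming the /api/threats.json endpoint might filter the taxonomy like the sketch below. The field names (`domains`, `patterns`, `id`, `risk`) and the sample values are assumptions for illustration, not the documented schema.

```python
import json

# Hypothetical excerpt of /api/threats.json -- this schema is an
# assumption for illustration, not the site's documented format.
sample = json.loads("""
{
  "domains": [
    {"id": "dom-example", "name": "Example domain",
     "patterns": [{"id": "pat-example-pattern", "risk": "High"}]}
  ]
}
""")

def patterns_by_risk(data, level):
    """Collect pattern ids across all domains at a given risk level."""
    return [p["id"]
            for d in data["domains"]
            for p in d.get("patterns", [])
            if p.get("risk") == level]

print(patterns_by_risk(sample, "High"))  # ['pat-example-pattern']
```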

Threats caused by AI systems that act independently, persist over time, or coordinate with other systems.

MIT: Multi-agent risks | EU AI Act: Systemic & autonomy risks (emerging)

Threats arising from how humans rely on, defer to, or lose control over AI systems.

MIT: Human-Computer Interaction | EU AI Act: Transparency & oversight requirements

Threats that distort markets, labor conditions, or the distribution of economic power.

MIT: Socioeconomic | EU AI Act: Market fairness, systemic risk

Threats that undermine the reliability, authenticity, or shared understanding of information.

MIT: Misinformation | EU AI Act: Manipulation, democratic harm

Threats involving unauthorized inference, tracking, or monitoring of individuals or groups.

MIT: Privacy & Security | EU AI Act: Fundamental rights, GDPR

AI-enabled attacks that compromise the integrity, confidentiality, or availability of digital systems — through input manipulation, model exploitation, or automated offense.

MIT: Privacy & Security | EU AI Act: Cybersecurity & Robustness

Threats that result in unfair treatment, exclusion, or social harm to individuals or groups.

MIT: Discrimination & Toxicity | EU AI Act: High-risk systems (employment, credit, education)

Threats that emerge from scale, coupling, and accumulation rather than single failures.

MIT: Long-term / existential | EU AI Act: Systemic risk framing (2026+)
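The Domain → Pattern → Incident hierarchy, together with the per-domain MIT and EU AI Act mappings above, could be modeled as a small set of dataclasses. This is a minimal sketch: the class and field names are illustrative assumptions, not the site's actual data model.

```python
from dataclasses import dataclass, field

# Illustrative model of the core hierarchy; names are assumptions.
@dataclass
class Incident:
    id: str
    summary: str

@dataclass
class Pattern:
    id: str
    name: str
    incidents: list[Incident] = field(default_factory=list)

@dataclass
class Domain:
    id: str
    description: str
    mit_category: str   # e.g. "Multi-agent risks"
    eu_ai_act: str      # e.g. "Systemic & autonomy risks (emerging)"
    patterns: list[Pattern] = field(default_factory=list)

# Example instance built from the first domain description above.
autonomy = Domain(
    id="dom-autonomy",  # hypothetical id
    description="Threats caused by AI systems that act independently, "
                "persist over time, or coordinate with other systems.",
    mit_category="Multi-agent risks",
    eu_ai_act="Systemic & autonomy risks (emerging)",
    patterns=[Pattern(id="pat-example", name="Example pattern")],
)
```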

Stakeholder Dimensions

Four dimensions capture who is affected, how they are exposed, who is responsible, and the scale of impact.

Affected Groups

Who is directly harmed by AI-enabled threats, organized into three categories.


Individuals: general-public, workers, children, vulnerable-communities

Organizations: business-organizations, government-institutions, critical-infrastructure-operators, developers-ai-builders

Systems: democratic-institutions, national-security-systems, society-at-large

Exposure Pathways

How individuals and organizations encounter AI-enabled threats.

TAX-ECO

Ecosystem Positions

Who caused, enabled, or failed to prevent an AI-enabled threat.

Developers & Providers (developers-providers)
Deployers & Operators (deployers-operators)
Regulators & Public Servants (regulators-public-servants)
Organizational Leaders (organizational-leaders)
Direct Users (direct-users)
Indirectly Affected (indirectly-affected)
TAX-IMP

Impact Level

Scale of harm from an individual incident.

Individual (individual)
Organization (organization)
Sector (sector)
Institution (institution)
Society-Wide (society-wide)
Global (global)

Analytical Taxonomy Dimensions

Additional dimensions classify contributing factors, technologies involved, harm categories, governance frameworks, and risk levels.

TAX-HARM

Harm Types

Categories of harm that AI-enabled threats can inflict on individuals, organizations, and society.

Governance Frameworks

Regulatory and governance frameworks relevant to AI threat mitigation and compliance.

Causal Factors

Contributing factors that enabled or amplified AI-enabled threats, across four categories.

Assets

AI technologies and data types involved in documented threat incidents.

TAX-RISK

Risk Levels

Severity, likelihood, and reversibility assessments applied to incidents and patterns.

Severity

Critical, High, Medium, Low — applied to patterns and incidents

Reversibility

Reversible, Partially Reversible, Irreversible — recoverability of harm
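The severity and reversibility scales above lend themselves to enumerations for programmatic triage. A minimal sketch: the level names mirror the lists above, while the `needs_escalation` rule is a made-up assumption for illustration, not site policy.

```python
from enum import Enum, IntEnum

# Severity ordered so comparisons work: Critical > High > Medium > Low.
class Severity(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

class Reversibility(Enum):
    REVERSIBLE = "reversible"
    PARTIALLY_REVERSIBLE = "partially-reversible"
    IRREVERSIBLE = "irreversible"

def needs_escalation(severity: Severity, rev: Reversibility) -> bool:
    """Illustrative triage rule (an assumption, not site policy):
    escalate anything High or above, or any irreversible harm."""
    return severity >= Severity.HIGH or rev is Reversibility.IRREVERSIBLE

print(needs_escalation(Severity.MEDIUM, Reversibility.IRREVERSIBLE))  # True
```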