EU AI Act Risk Categories: Mapping to Real-World AI Threats
Last updated: 2026-03-12
Why This Mapping Exists
The EU Artificial Intelligence Act is the world’s first comprehensive AI regulation. Its risk-based classification system determines which AI systems face binding requirements, with penalties for non-compliance of up to 7% of global annual turnover.
However, the AI Act classifies AI systems by use case and risk level, not by threat type. It tells you what your AI system must comply with, but not which specific threats it faces or what has gone wrong in comparable systems. This mapping bridges that gap by connecting each AI Act risk category to the TopAIThreats threat patterns that are most relevant, linking to documented incidents, and tracing the causal factors that enable regulatory violations.
This page references the EU AI Act as published in the Official Journal of the European Union, with phased enforcement from February 2025 through August 2026. For a comparison between the AI Act and other frameworks, see AI Governance Frameworks Compared.
AI Act Risk Tiers Mapped to TopAIThreats
Tier 1: Prohibited AI Practices (Article 5)
AI Act requirements: These AI practices are banned outright in the EU, effective February 2, 2025. Non-compliance carries the highest penalty tier: up to 35 million EUR or 7% of global annual turnover, whichever is higher.
5(1)(a) — Subliminal Manipulation
AI Act prohibition: AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, to materially distort behavior in a manner that causes or is reasonably likely to cause significant harm.
TopAIThreats mapping:
- Primary pattern: Deceptive or Manipulative Interfaces (DOM-CTL) — AI systems designed to manipulate user behavior without informed consent
- Related patterns: Implicit Authority Transfer (DOM-CTL) — users unconsciously deferring decisions to AI systems
- Causal factors: Social Engineering, Over-Automation
Threat context: Subliminal manipulation by AI extends beyond traditional dark patterns. LLM-based systems can adapt persuasion strategies dynamically in response to user behavior, achieving a sophistication of manipulation that static interface design cannot. TopAIThreats documents cases where AI systems personalized manipulative content at a scale and granularity that prior technologies did not support.
5(1)(b) — Exploitation of Vulnerabilities
AI Act prohibition: AI systems that exploit vulnerabilities of specific groups of persons due to their age, disability, or social or economic situation to materially distort behavior in a harmful manner.
TopAIThreats mapping:
- Primary pattern: Proxy Discrimination (DOM-SOC) — AI systems that disadvantage protected groups through indirect data proxies
- Related patterns: Allocational Harm (DOM-SOC) — AI systems that distribute resources unfairly to vulnerable populations
- Causal factors: Training Data Bias, Accountability Vacuum
Threat context: The exploitation of vulnerabilities extends beyond intentional targeting. AI systems trained on population-level data can inadvertently learn to exploit cognitive vulnerabilities of specific demographics — elderly users, children, or economically disadvantaged groups — even without explicit design intent. The causal factor analysis shows that training data bias is the primary enabler, not deliberate exploitation.
5(1)(c) — Social Scoring
AI Act prohibition: AI systems that evaluate or classify natural persons or groups of persons based on social behavior or personal characteristics, leading to detrimental or unfavorable treatment that is unjustified or disproportionate to the social behavior or its context. Unlike earlier drafts, the final text applies to both public and private actors, not only public authorities.
TopAIThreats mapping:
- Primary pattern: Behavioral Profiling Without Consent (DOM-PRI) — systematic collection and scoring of individual behavior patterns
- Related patterns: Mass Surveillance Amplification (DOM-PRI) — AI enabling population-scale behavioral monitoring, Allocational Harm (DOM-SOC) — scoring-based resource allocation
- Causal factors: Regulatory Gap, Accountability Vacuum
5(1)(h) — Real-Time Remote Biometric Identification
AI Act prohibition: Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, with limited exceptions for serious crime, missing persons, and imminent threats.
TopAIThreats mapping:
- Primary pattern: Biometric Exploitation (DOM-PRI) — misuse of biometric data including facial recognition, voice, and gait analysis
- Related patterns: Mass Surveillance Amplification (DOM-PRI) — real-time biometric ID as a component of mass surveillance infrastructure
- Causal factors: Regulatory Gap, Weaponization
Threat context: Real-time biometric identification in public spaces represents the convergence of two TopAIThreats domains — Privacy (the surveillance mechanism) and Discrimination (differential accuracy across demographic groups). For example, independent studies have reported facial recognition error rates up to 34% for darker-skinned women versus less than 1% for lighter-skinned men, implying discriminatory enforcement patterns even when systems operate as intended.
Tier 2: High-Risk AI Systems (Articles 6-49)
AI Act requirements: High-risk systems must implement risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity measures. Third-party conformity assessments are required for some categories. Effective August 2, 2026.
Annex III Category 1 — Biometrics
High-risk use cases: Remote biometric identification (non-real-time), biometric categorization based on sensitive attributes, and emotion recognition. Note that emotion inference in workplace and education settings is prohibited outright under Article 5 rather than classified as high-risk.
TopAIThreats mapping:
- Primary patterns: Biometric Exploitation (DOM-PRI), Sensitive Attribute Inference (DOM-PRI)
- Causal factors: Training Data Bias, Model Opacity
Annex III Category 2 — Critical Infrastructure
High-risk use cases: AI systems used as safety components in management and operation of critical digital infrastructure, road traffic, water, gas, heating, and electricity supply.
TopAIThreats mapping:
- Primary patterns: Infrastructure Dependency Collapse (DOM-SYS), Decision Loop Automation (DOM-ECO)
- Related patterns: Unsafe Human-in-the-Loop Failures (DOM-CTL) — where human oversight of critical AI fails
- Causal factors: Over-Automation, Misconfigured Deployment
Threat context: AI failures in critical infrastructure cascade. TopAIThreats documents how AI-managed infrastructure creates dependency chains where a single model failure can trigger cascading outages across interconnected systems — a systemic risk (DOM-SYS) that the AI Act’s system-level assessment requirements aim to prevent.
Annex III Category 3 — Education and Vocational Training
High-risk use cases: AI systems that determine access to education, evaluate learning outcomes, assess appropriate level of education, or monitor prohibited behavior during tests.
TopAIThreats mapping:
- Primary patterns: Allocational Harm (DOM-SOC), Data Imbalance Bias (DOM-SOC)
- Related patterns: Overreliance & Automation Bias (DOM-CTL) — educators over-trusting AI assessment
- Causal factors: Training Data Bias, Accountability Vacuum
Annex III Category 4 — Employment and Worker Management
High-risk use cases: AI for recruitment, screening, selection, performance evaluation, promotion decisions, task allocation, and monitoring.
TopAIThreats mapping:
- Primary patterns: Proxy Discrimination (DOM-SOC), Allocational Harm (DOM-SOC)
- Related patterns: Behavioral Profiling Without Consent (DOM-PRI) — workplace monitoring, Automation-Induced Job Degradation (DOM-ECO) — AI systems that degrade job quality
- Causal factors: Training Data Bias, Model Opacity, Accountability Vacuum
Threat context: Employment AI is among the most documented sources of discrimination risk. TopAIThreats links to incidents where resume screening tools penalized names associated with specific demographics, interview scoring systems rated candidates differently based on accent, and performance monitoring AI disproportionately flagged workers in certain roles. The AI Act’s high-risk classification for employment AI directly addresses these documented threat patterns.
Annex III Category 5 — Essential Services
High-risk use cases: AI for credit scoring, insurance pricing, evaluating eligibility for public assistance and benefits.
TopAIThreats mapping:
- Primary patterns: Allocational Harm (DOM-SOC), Proxy Discrimination (DOM-SOC)
- Related patterns: Economic Dependency on Black-Box Systems (DOM-ECO) — institutions relying on opaque AI for critical allocation decisions
- Causal factors: Model Opacity, Training Data Bias
Annex III Category 6 — Law Enforcement
High-risk use cases: AI for risk assessment of natural persons, polygraphs, evidence evaluation, crime prediction, and profiling.
TopAIThreats mapping:
- Primary patterns: Mass Surveillance Amplification (DOM-PRI), Proxy Discrimination (DOM-SOC)
- Related patterns: Overreliance & Automation Bias (DOM-CTL) — law enforcement over-trusting AI risk assessments
- Causal factors: Training Data Bias, Accountability Vacuum, Regulatory Gap
Threat context: Predictive policing and risk assessment tools have been among the most documented sources of AI discrimination harm. TopAIThreats incident records show recidivism prediction tools with significantly higher false positive rates for Black defendants compared to white defendants — a proxy discrimination pattern that the AI Act’s high-risk requirements for law enforcement AI specifically target.
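The disparity described above is typically measured as a per-group false positive rate. The sketch below shows one way such an audit can be computed; the group labels and prediction data are illustrative, not drawn from any real incident record.

```python
# Minimal per-group false positive rate audit (illustrative data only).
from collections import defaultdict

def false_positive_rates(groups, y_true, y_pred):
    """FPR per group: share of actual negatives (label 0) flagged positive (1)."""
    fp = defaultdict(int)
    neg = defaultdict(int)
    for g, t, p in zip(groups, y_true, y_pred):
        if t == 0:            # actual negative, e.g. did not reoffend
            neg[g] += 1
            if p == 1:        # but flagged high-risk anyway
                fp[g] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Hypothetical toy data: two demographic groups "a" and "b".
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
y_true = [0, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 1, 0, 1, 0, 0, 1, 1]

rates = false_positive_rates(groups, y_true, y_pred)
# Group "a": 2 of 3 actual negatives flagged; group "b": 1 of 3.
```

A large gap between groups on this metric is one concrete signature of the proxy discrimination pattern referenced above.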
Annex III Category 7 — Migration, Asylum, and Border Control
High-risk use cases: AI for risk assessment at borders, examination of asylum applications, and monitoring of migration.
TopAIThreats mapping:
- Primary patterns: Biometric Exploitation (DOM-PRI), Allocational Harm (DOM-SOC)
- Causal factors: Training Data Bias, Regulatory Gap
Annex III Category 8 — Administration of Justice
High-risk use cases: AI assisting judicial authorities in researching and interpreting facts and law, and applying the law to concrete facts.
TopAIThreats mapping:
- Primary patterns: Loss of Human Agency (DOM-CTL), Misinformation & Hallucinated Content (DOM-INF)
- Related patterns: Overreliance & Automation Bias (DOM-CTL) — judges deferring to AI analysis
- Causal factors: Hallucination Tendency, Over-Automation
Threat context: AI in judicial contexts creates unique risks at the intersection of hallucination and authority. TopAIThreats documents cases where LLM-based legal research tools generated fabricated case citations that were submitted to courts — a hallucination tendency (DOM-INF) that becomes a justice-system integrity threat (DOM-CTL) when deployed in high-stakes judicial environments.
Tier 3: Limited Risk (Article 50)
AI Act requirements: Transparency obligations — users must be informed they are interacting with AI, and synthetic content must be labeled. Effective August 2, 2026.
TopAIThreats mapping:
- Chatbots and conversational AI: Deceptive or Manipulative Interfaces (DOM-CTL), Implicit Authority Transfer (DOM-CTL)
- Deepfake and synthetic content: Deepfake Identity Hijacking (DOM-INF), Synthetic Media Manipulation (DOM-INF)
- Emotion recognition: Sensitive Attribute Inference (DOM-PRI)
- Causal factors: Social Engineering, Intentional Fraud
Threat context: The AI Act’s transparency requirements address the enabling conditions for multiple TopAIThreats patterns. When users do not know they are interacting with AI, they are more susceptible to manipulation (DOM-CTL), more likely to trust hallucinated outputs (DOM-INF), and less able to exercise meaningful consent over data processing (DOM-PRI).
Tier 4: General-Purpose AI (Articles 51-56)
AI Act requirements: GPAI providers must maintain technical documentation, comply with copyright law, and publish training content summaries. Effective August 2, 2025. The systemic-risk designation — applied to models trained with more than 10²⁵ FLOPs — activates additional obligations around adversarial testing, incident reporting, cybersecurity measures, and energy consumption reporting.
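The 10²⁵ FLOP presumption can be checked mechanically. The threshold below is from the Act; the compute estimate (roughly 6 FLOPs per parameter per training token) is a common rule-of-thumb approximation, not part of the regulation, and the model sizes are hypothetical.

```python
# Sketch: flag the AI Act's systemic-risk presumption for GPAI models.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # Article 51 presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough transformer training compute: ~6 FLOPs per parameter per token
    (a common approximation, not defined by the AI Act)."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(flops: float) -> bool:
    return flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical 500B-parameter model trained on 10T tokens: ~3e25 FLOPs,
# above the 1e25 presumption threshold.
above = presumed_systemic_risk(estimated_training_flops(5e11, 1e13))
```

Note that the presumption is rebuttable and the Commission can also designate models on other grounds, so a FLOP check is a screening heuristic, not a compliance determination.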
TopAIThreats mapping for all GPAI:
- Copyright and training data: Data Imbalance Bias (DOM-SOC) — training data composition affects output quality and fairness
- Technical documentation: Model Opacity — the causal factor that documentation requirements aim to reduce
- Causal factors: Model Opacity, Insufficient Safety Testing
TopAIThreats mapping for systemic risk GPAI:
- Adversarial testing: Adversarial Evasion (DOM-SEC), Data Poisoning (DOM-SEC)
- Incident reporting: All 15 causal factors — incident classification requires root cause analysis
- Cybersecurity: AI-Morphed Malware (DOM-SEC), Automated Vulnerability Discovery (DOM-SEC)
- Systemic risk assessment: Infrastructure Dependency Collapse (DOM-SYS), Accumulative Risk & Trust Erosion (DOM-SYS)
- Causal factors: Adversarial Attack, Competitive Pressure, Weaponization
Threat context: The GPAI tier is where the AI Act most directly addresses foundation model risks. TopAIThreats documents how competitive pressure between frontier model providers creates a race dynamic that the systemic risk provisions aim to counteract — requiring safety testing and incident reporting regardless of competitive timelines.
Cross-Reference Summary
| AI Act Category | Primary TopAIThreats Domain | Key Patterns | Key Causal Factor |
|---|---|---|---|
| Prohibited: Subliminal manipulation | Human–AI Control | Deceptive Interfaces, Implicit Authority Transfer | Social Engineering |
| Prohibited: Vulnerability exploitation | Discrimination & Social Harm | Proxy Discrimination, Allocational Harm | Training Data Bias |
| Prohibited: Social scoring | Privacy & Surveillance | Behavioral Profiling, Mass Surveillance | Regulatory Gap |
| Prohibited: Real-time biometrics | Privacy & Surveillance | Biometric Exploitation, Mass Surveillance | Weaponization |
| High-risk: Biometrics | Privacy & Surveillance | Biometric Exploitation, Sensitive Attribute Inference | Training Data Bias |
| High-risk: Critical infrastructure | Systemic & Catastrophic | Infrastructure Dependency Collapse | Over-Automation |
| High-risk: Education | Discrimination & Social Harm | Allocational Harm, Data Imbalance Bias | Training Data Bias |
| High-risk: Employment | Discrimination & Social Harm | Proxy Discrimination, Allocational Harm | Training Data Bias |
| High-risk: Essential services | Discrimination & Social Harm | Allocational Harm, Proxy Discrimination | Model Opacity |
| High-risk: Law enforcement | Privacy & Surveillance | Mass Surveillance, Proxy Discrimination | Training Data Bias |
| High-risk: Migration | Privacy & Surveillance | Biometric Exploitation, Allocational Harm | Training Data Bias |
| High-risk: Justice | Human–AI Control | Loss of Human Agency, Misinformation | Hallucination Tendency |
| Limited risk: Transparency | Human–AI Control | Deceptive or Manipulative Interfaces, Deepfake Identity Hijacking | Social Engineering |
| GPAI: Systemic Risk | Security & Cyber; Systemic & Catastrophic | Adversarial Evasion, Infrastructure Dependency Collapse | Competitive Pressure |
Key insight: The AI Act’s risk classification maps most strongly to three TopAIThreats domains (see the full taxonomy for all 8 domains and 42 patterns) — Discrimination & Social Harm (DOM-SOC), Privacy & Surveillance (DOM-PRI), and Human–AI Control (DOM-CTL). These three domains account for the majority of high-risk and prohibited AI practices. Security risks (DOM-SEC) appear primarily in the GPAI systemic risk provisions. Economic risks (DOM-ECO), Agentic risks (DOM-AGT), and Information Integrity risks (DOM-INF) are less directly addressed — representing regulatory gaps that organizations should fill through complementary governance frameworks.
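Teams that want to query this cross-reference programmatically can transcribe the table into a simple lookup structure. The sketch below shows two entries; the key format and function name are illustrative choices, not part of the TopAIThreats taxonomy.

```python
# Sketch: the cross-reference table above as a queryable structure.
# Only two of the table's rows are transcribed here for brevity.
AI_ACT_THREAT_MAP = {
    "high-risk:employment": {
        "domain": "Discrimination & Social Harm (DOM-SOC)",
        "patterns": ["Proxy Discrimination", "Allocational Harm"],
        "key_causal_factor": "Training Data Bias",
    },
    "prohibited:social-scoring": {
        "domain": "Privacy & Surveillance (DOM-PRI)",
        "patterns": ["Behavioral Profiling", "Mass Surveillance"],
        "key_causal_factor": "Regulatory Gap",
    },
}

def threats_for(category: str) -> list:
    """Return the threat patterns mapped to an AI Act category, or []."""
    entry = AI_ACT_THREAT_MAP.get(category)
    return entry["patterns"] if entry else []

patterns = threats_for("high-risk:employment")
# ['Proxy Discrimination', 'Allocational Harm']
```

Encoding the mapping this way lets a risk register or compliance tool attach the relevant threat patterns automatically once a system's AI Act classification is known.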
How to Use This Mapping
For compliance teams: Use this mapping to understand which TopAIThreats patterns apply to your AI Act risk classification. When conducting the required risk assessment for high-risk systems, this page identifies the specific threat patterns and causal factors to evaluate.
For risk managers: Use this mapping to connect regulatory compliance to operational risk management. The EU AI Act tells you what is required; TopAIThreats tells you what specific threats those requirements are designed to prevent.
For legal teams: Use this mapping to inform AI Act conformity assessments. When documenting that a high-risk system meets robustness requirements, reference the specific adversarial threat patterns that your mitigations address.
For executive leadership: Use the cross-reference summary to understand which business functions are most affected by AI Act requirements and which threat domains create the highest regulatory exposure.
Methodology Note
This mapping references the EU Artificial Intelligence Act as published in the Official Journal of the European Union (Regulation (EU) 2024/1689) and the TopAIThreats taxonomy v3.0. Risk category to threat pattern mappings are maintained by the TopAIThreats editorial team and updated when the AI Act implementing guidance or the taxonomy is revised. This is not an official EU publication — it is an independent cross-reference maintained by TopAIThreats. If you believe this mapping contains inaccuracies, contact us for correction.