
EU AI Act Risk Categories: Mapping to Real-World AI Threats

Last updated: 2026-03-12

Why This Mapping Exists

The EU Artificial Intelligence Act is the world’s first comprehensive AI regulation. Its risk-based classification system determines which AI systems face binding requirements, with penalties for non-compliance of up to 7% of global annual turnover.

However, the AI Act classifies AI systems by use case and risk level, not by threat type. It tells you what your AI system must comply with, but not which specific threats it faces or what has gone wrong in comparable systems. This mapping bridges that gap by connecting each AI Act risk category to the TopAIThreats threat patterns that are most relevant, linking to documented incidents, and tracing the causal factors that enable regulatory violations.

This page references the EU AI Act as published in the Official Journal of the European Union, with phased enforcement from February 2025 through August 2026. For a comparison between the AI Act and other frameworks, see AI Governance Frameworks Compared.


AI Act Risk Tiers Mapped to TopAIThreats

Tier 1: Prohibited AI Practices (Article 5)

AI Act requirements: These AI practices are banned outright in the EU, effective February 2, 2025. Non-compliance carries the highest penalties — up to 35 million EUR or 7% of global annual turnover, whichever is higher.
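As a rough illustration of how that penalty ceiling scales with company size, the sketch below assumes the higher of the fixed amount and the turnover percentage applies; the turnover figure is hypothetical and this is illustrative only, not legal advice.

```python
def max_article_5_fine(global_annual_turnover_eur: float) -> float:
    """Ceiling for an Article 5 fine: the greater of EUR 35 million
    or 7% of worldwide annual turnover (illustrative sketch only)."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion turnover: 7% of turnover
# (EUR 140 million) exceeds the fixed EUR 35 million amount, so it sets the ceiling.
print(max_article_5_fine(2_000_000_000))  # 140000000.0
```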

5(1)(a) — Subliminal Manipulation

AI Act prohibition: AI systems that deploy subliminal techniques beyond a person’s consciousness to materially distort behavior in a manner that causes or is reasonably likely to cause physical or psychological harm.

TopAIThreats mapping: primary domain Human–AI Control (DOM-CTL); key patterns: Deceptive Interfaces, Implicit Authority Transfer; key causal factor: Social Engineering.

Threat context: Subliminal manipulation by AI extends beyond traditional dark patterns. LLM-based systems can dynamically adapt persuasion strategies based on user responses, creating a level of manipulation sophistication that static interface design cannot achieve. TopAIThreats documents cases where AI systems personalized manipulative content at a scale and granularity that prior technologies did not support.


5(1)(b) — Exploitation of Vulnerabilities

AI Act prohibition: AI systems that exploit vulnerabilities of specific groups of persons due to their age, disability, or social or economic situation to materially distort behavior in a harmful manner.

TopAIThreats mapping: primary domain Discrimination & Social Harm (DOM-SOC); key patterns: Proxy Discrimination, Allocational Harm; key causal factor: Training Data Bias.

Threat context: The exploitation of vulnerabilities extends beyond intentional targeting. AI systems trained on population-level data can inadvertently learn to exploit cognitive vulnerabilities of specific demographics — elderly users, children, or economically disadvantaged groups — even without explicit design intent. The causal factor analysis shows that training data bias is the primary enabler, not deliberate exploitation.


5(1)(c) — Social Scoring

AI Act prohibition: AI systems that evaluate or classify natural persons or groups of persons based on their social behavior or personal characteristics, where the resulting social score leads to detrimental or unfavorable treatment that is disproportionate to or unrelated to the context in which the data was generated; the final Act applies this prohibition to private as well as public actors.

TopAIThreats mapping: primary domain Privacy & Surveillance (DOM-PRI); key patterns: Behavioral Profiling, Mass Surveillance; key causal factor: Regulatory Gap.


5(1)(d) — Real-Time Remote Biometric Identification

AI Act prohibition: Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, with limited exceptions for serious crime, missing persons, and imminent threats.

TopAIThreats mapping: primary domain Privacy & Surveillance (DOM-PRI); key patterns: Biometric Exploitation, Mass Surveillance; key causal factor: Weaponization.

Threat context: Real-time biometric identification in public spaces represents the convergence of two TopAIThreats domains — Privacy (the surveillance mechanism) and Discrimination (differential accuracy across demographic groups). For example, independent audits of commercial facial analysis systems have reported gender-classification error rates of up to 34% for darker-skinned women versus less than 1% for lighter-skinned men, implying discriminatory enforcement patterns even when systems operate as intended.


Tier 2: High-Risk AI Systems (Articles 6-49)

AI Act requirements: High-risk systems must implement risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity measures. Third-party conformity assessment is required for some categories. Effective August 2, 2026.

Annex III Category 1 — Biometrics

High-risk use cases: Remote biometric identification (non-real-time), biometric categorization based on sensitive attributes, and emotion recognition. (Emotion recognition in workplaces and educational institutions falls under the Article 5 prohibitions rather than this high-risk category.)

TopAIThreats mapping: primary domain Privacy & Surveillance (DOM-PRI); key patterns: Biometric Exploitation, Sensitive Attribute Inference; key causal factor: Training Data Bias.


Annex III Category 2 — Critical Infrastructure

High-risk use cases: AI systems used as safety components in management and operation of critical digital infrastructure, road traffic, water, gas, heating, and electricity supply.

TopAIThreats mapping: primary domain Systemic & Catastrophic (DOM-SYS); key pattern: Infrastructure Dependency Collapse; key causal factor: Over-Automation.

Threat context: AI failures in critical infrastructure cascade. TopAIThreats documents how AI-managed infrastructure creates dependency chains where a single model failure can trigger cascading outages across interconnected systems — a systemic risk (DOM-SYS) that the AI Act’s system-level assessment requirements aim to prevent.
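To make the dependency-chain point concrete, here is a minimal sketch of a failure cascade over a dependency graph; the node names and edges are invented for illustration and are not drawn from TopAIThreats incident data.

```python
from collections import deque

# Hypothetical dependency graph: an edge A -> B means B depends on A,
# so a failure of A propagates to B.
dependents = {
    "grid_load_forecaster": ["substation_controller", "heating_dispatch"],
    "substation_controller": ["traffic_signal_network"],
    "heating_dispatch": [],
    "traffic_signal_network": [],
}

def cascade(failed_node: str) -> set[str]:
    """Breadth-first walk of everything that transitively depends on the failed node."""
    affected, queue = {failed_node}, deque([failed_node])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return affected

# A single model failure in the load forecaster ultimately affects four systems:
# itself plus three downstream dependents.
print(cascade("grid_load_forecaster"))
```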


Annex III Category 3 — Education and Vocational Training

High-risk use cases: AI systems that determine access to education, evaluate learning outcomes, assess appropriate level of education, or monitor prohibited behavior during tests.

TopAIThreats mapping: primary domain Discrimination & Social Harm (DOM-SOC); key patterns: Allocational Harm, Data Imbalance Bias; key causal factor: Training Data Bias.


Annex III Category 4 — Employment and Worker Management

High-risk use cases: AI for recruitment, screening, selection, performance evaluation, promotion decisions, task allocation, and monitoring.

TopAIThreats mapping: primary domain Discrimination & Social Harm (DOM-SOC); key patterns: Proxy Discrimination, Allocational Harm; key causal factor: Training Data Bias.

Threat context: Employment AI is among the most documented sources of discrimination risk. TopAIThreats links to incidents where resume screening tools penalized names associated with specific demographics, interview scoring systems rated candidates differently based on accent, and performance monitoring AI disproportionately flagged workers in certain roles. The AI Act’s high-risk classification for employment AI directly addresses these documented threat patterns.


Annex III Category 5 — Essential Services

High-risk use cases: AI for credit scoring, insurance pricing, evaluating eligibility for public assistance and benefits.

TopAIThreats mapping: primary domain Discrimination & Social Harm (DOM-SOC); key patterns: Allocational Harm, Proxy Discrimination; key causal factor: Model Opacity.


Annex III Category 6 — Law Enforcement

High-risk use cases: AI for risk assessment of natural persons, polygraphs, evidence evaluation, crime prediction, and profiling.

TopAIThreats mapping: primary domain Privacy & Surveillance (DOM-PRI); key patterns: Mass Surveillance, Proxy Discrimination; key causal factor: Training Data Bias.

Threat context: Predictive policing and risk assessment tools have been among the most documented sources of AI discrimination harm. TopAIThreats incident records show recidivism prediction tools with significantly higher false positive rates for Black defendants compared to white defendants — a proxy discrimination pattern that the AI Act’s high-risk requirements for law enforcement AI specifically target.
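The disparity described above is measurable. Below is a minimal sketch, using made-up labels and predictions rather than real incident data, of how a team might compute per-group false positive rates for a risk-scoring classifier.

```python
def false_positive_rate(y_true: list[int], y_pred: list[int]) -> float:
    """FPR = false positives / actual negatives."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0

# Hypothetical per-group outcomes: 1 = flagged / reoffended, 0 = not.
groups = {
    "group_a": ([0, 0, 0, 1, 0, 1, 0, 0], [1, 0, 1, 1, 0, 1, 1, 0]),
    "group_b": ([0, 0, 0, 1, 0, 1, 0, 0], [0, 0, 1, 1, 0, 1, 0, 0]),
}

for name, (y_true, y_pred) in groups.items():
    print(name, round(false_positive_rate(y_true, y_pred), 2))
# The gap between the two printed rates (0.5 vs 0.17 here) is the kind of
# false-positive disparity that high-risk accuracy and data governance
# requirements are meant to surface.
```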


Annex III Category 7 — Migration, Asylum, and Border Control

High-risk use cases: AI for risk assessment at borders, examination of asylum applications, and monitoring of migration.

TopAIThreats mapping: primary domain Privacy & Surveillance (DOM-PRI); key patterns: Biometric Exploitation, Allocational Harm; key causal factor: Training Data Bias.


Annex III Category 8 — Administration of Justice

High-risk use cases: AI assisting judicial authorities in researching and interpreting facts and law, and applying the law to concrete facts.

TopAIThreats mapping: primary domain Human–AI Control (DOM-CTL); key patterns: Loss of Human Agency, Misinformation; key causal factor: Hallucination Tendency.

Threat context: AI in judicial contexts creates unique risks at the intersection of hallucination and authority. TopAIThreats documents cases where LLM-based legal research tools generated fabricated case citations that were submitted to courts — a hallucination tendency (DOM-INF) that becomes a justice-system integrity threat (DOM-CTL) when deployed in high-stakes judicial environments.


Tier 3: Limited Risk (Article 50)

AI Act requirements: Transparency obligations — users must be informed they are interacting with AI, and synthetic content must be labeled. Effective August 2, 2026.

TopAIThreats mapping: primary domain Human–AI Control (DOM-CTL); key patterns: Deceptive or Manipulative Interfaces, Deepfake Identity Hijacking; key causal factor: Social Engineering.

Threat context: The AI Act’s transparency requirements address the enabling conditions for multiple TopAIThreats patterns. When users do not know they are interacting with AI, they are more susceptible to manipulation (DOM-CTL), more likely to trust hallucinated outputs (DOM-INF), and less able to exercise meaningful consent over data processing (DOM-PRI).


Tier 4: General-Purpose AI (Articles 51-56)

AI Act requirements: GPAI providers must maintain technical documentation, comply with copyright law, and publish training content summaries. Effective August 2, 2025. The systemic-risk designation — applied to models trained with more than 10²⁵ FLOPs — activates additional obligations around adversarial testing, incident reporting, cybersecurity measures, and energy consumption reporting.
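The 10²⁵ FLOP line is the one quantitative trigger in this tier. The sketch below estimates training compute with the common "roughly 6 × parameters × training tokens" rule of thumb for dense transformer training, an approximation we are assuming here rather than anything the Act prescribes, and compares it to the threshold.

```python
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # Article 51 presumption threshold

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rule-of-thumb estimate for dense transformer training: ~6 * N * D FLOPs."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical run: a 70B-parameter model trained on 15 trillion tokens
# lands at ~6.3e24 FLOPs, just under the presumption threshold.
print(estimated_training_flops(70e9, 15e12))  # ~6.3e24
print(presumed_systemic_risk(70e9, 15e12))    # False
```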

TopAIThreats mapping for all GPAI:

TopAIThreats mapping for systemic risk GPAI: primary domains Security & Cyber (DOM-SEC) and Systemic & Catastrophic (DOM-SYS); key patterns: Adversarial Evasion, Infrastructure Dependency Collapse; key causal factor: Competitive Pressure.

Threat context: The GPAI tier is where the AI Act most directly addresses foundation model risks. TopAIThreats documents how competitive pressure between frontier model providers creates a race dynamic that the systemic risk provisions aim to counteract — requiring safety testing and incident reporting regardless of competitive timelines.


Cross-Reference Summary

AI Act Category | Primary TopAIThreats Domain | Key Patterns | Key Causal Factor
--- | --- | --- | ---
Prohibited: Subliminal manipulation | Human–AI Control | Deceptive Interfaces, Implicit Authority Transfer | Social Engineering
Prohibited: Vulnerability exploitation | Discrimination & Social Harm | Proxy Discrimination, Allocational Harm | Training Data Bias
Prohibited: Social scoring | Privacy & Surveillance | Behavioral Profiling, Mass Surveillance | Regulatory Gap
Prohibited: Real-time biometrics | Privacy & Surveillance | Biometric Exploitation, Mass Surveillance | Weaponization
High-risk: Biometrics | Privacy & Surveillance | Biometric Exploitation, Sensitive Attribute Inference | Training Data Bias
High-risk: Critical infrastructure | Systemic & Catastrophic | Infrastructure Dependency Collapse | Over-Automation
High-risk: Education | Discrimination & Social Harm | Allocational Harm, Data Imbalance Bias | Training Data Bias
High-risk: Employment | Discrimination & Social Harm | Proxy Discrimination, Allocational Harm | Training Data Bias
High-risk: Essential services | Discrimination & Social Harm | Allocational Harm, Proxy Discrimination | Model Opacity
High-risk: Law enforcement | Privacy & Surveillance | Mass Surveillance, Proxy Discrimination | Training Data Bias
High-risk: Migration | Privacy & Surveillance | Biometric Exploitation, Allocational Harm | Training Data Bias
High-risk: Justice | Human–AI Control | Loss of Human Agency, Misinformation | Hallucination Tendency
Limited risk: Transparency | Human–AI Control | Deceptive or Manipulative Interfaces, Deepfake Identity Hijacking | Social Engineering
GPAI: Systemic Risk | Security & Cyber; Systemic & Catastrophic | Adversarial Evasion, Infrastructure Dependency Collapse | Competitive Pressure

Key insight: The AI Act’s risk classification maps most strongly to three TopAIThreats domains (see the full taxonomy for all 8 domains and 42 patterns) — Discrimination & Social Harm (DOM-SOC), Privacy & Surveillance (DOM-PRI), and Human–AI Control (DOM-CTL). These three domains account for the majority of high-risk and prohibited AI practices. Security risks (DOM-SEC) appear primarily in the GPAI systemic risk provisions. Economic risks (DOM-ECO), Agentic risks (DOM-AGT), and Information Integrity risks (DOM-INF) are less directly addressed — representing regulatory gaps that organizations should fill through complementary governance frameworks.
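For teams that prefer to query this cross-reference programmatically, here is a minimal sketch of encoding a few rows of the table above as a lookup structure; the field names and helper function are our own, and the data comes straight from the Cross-Reference Summary.

```python
from dataclasses import dataclass

@dataclass
class Mapping:
    ai_act_category: str
    primary_domain: str
    key_patterns: list[str]
    key_causal_factor: str

# A few rows from the Cross-Reference Summary; extend with the remaining rows as needed.
CROSS_REFERENCE = [
    Mapping("Prohibited: Subliminal manipulation", "Human-AI Control",
            ["Deceptive Interfaces", "Implicit Authority Transfer"], "Social Engineering"),
    Mapping("High-risk: Employment", "Discrimination & Social Harm",
            ["Proxy Discrimination", "Allocational Harm"], "Training Data Bias"),
    Mapping("GPAI: Systemic Risk", "Security & Cyber; Systemic & Catastrophic",
            ["Adversarial Evasion", "Infrastructure Dependency Collapse"], "Competitive Pressure"),
]

def patterns_for(category_prefix: str) -> list[str]:
    """Collect the key threat patterns for every AI Act category matching a prefix."""
    return [p for m in CROSS_REFERENCE if m.ai_act_category.startswith(category_prefix)
            for p in m.key_patterns]

print(patterns_for("High-risk"))  # ['Proxy Discrimination', 'Allocational Harm']
```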


How to Use This Mapping

For compliance teams: Use this mapping to understand which TopAIThreats patterns apply to your AI Act risk classification. When conducting the required risk assessment for high-risk systems, this page identifies the specific threat patterns and causal factors to evaluate.

For risk managers: Use this mapping to connect regulatory compliance to operational risk management. The EU AI Act tells you what is required; TopAIThreats tells you what specific threats those requirements are designed to prevent.

For legal teams: Use this mapping to inform AI Act conformity assessments. When documenting that a high-risk system meets robustness requirements, reference the specific adversarial threat patterns that your mitigations address.

For executive leadership: Use the cross-reference summary to understand which business functions are most affected by AI Act requirements and which threat domains create the highest regulatory exposure.


Methodology Note

This mapping references the EU Artificial Intelligence Act as published in the Official Journal of the European Union (Regulation (EU) 2024/1689) and the TopAIThreats taxonomy v3.0. Risk category to threat pattern mappings are maintained by the TopAIThreats editorial team and updated when the AI Act implementing guidance or the taxonomy is revised. This is not an official EU publication — it is an independent cross-reference maintained by TopAIThreats. If you believe this mapping contains inaccuracies, contact us for correction.