MITRE ATLAS Mapping: AI Adversarial Techniques vs Real-World AI Threats
Last updated: 2026-03-12
Why This Mapping Exists
MITRE ATLAS (Adversarial Threat Landscape for Artificial Intelligence Systems) is the most comprehensive knowledge base of adversarial tactics and techniques against AI systems. Built on the ATT&CK framework model, it provides a structured vocabulary for describing how adversaries attack machine learning systems.
However, ATLAS focuses exclusively on adversarial ML security — it does not address non-adversarial AI failures, societal harms, privacy risks, or systemic threats. This mapping bridges that gap by connecting each ATLAS tactic to the corresponding TopAIThreats threat patterns, linking to documented incidents, and identifying which of the 8 risk domains are affected. The result is a reference that places adversarial ML attacks within the broader AI threat landscape. Mappings indicate plausible relationships based on pattern and incident analysis — not exhaustive or official coverage of every ATLAS technique or TopAIThreats threat.
Designed for red and blue teams, risk managers, and compliance leads who already use MITRE ATLAS and need to understand its relationship to real-world AI harms across the full threat landscape.
ATLAS Tactics and Techniques Mapped to TopAIThreats
Reconnaissance (AML.TA0002)
ATLAS description: Techniques used to gather information about a target ML system — model architecture, training data characteristics, API behavior, and deployment configuration.
TopAIThreats mapping:
- Primary pattern: Model Inversion & Data Extraction (DOM-SEC) — techniques that probe model behavior to extract information about internals
- Related patterns: Adversarial Evasion (DOM-SEC) — reconnaissance is a prerequisite for targeted evasion attacks
- Causal factors: Model Opacity, Inadequate Access Controls
Beyond ATLAS: Reconnaissance against AI systems differs from traditional cyber reconnaissance because the model itself leaks information through its outputs. Every API response contains implicit information about model architecture and training data — a fundamental property that ATLAS documents but that TopAIThreats traces to the causal factor of model opacity.
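To make that leakage concrete, here is a minimal black-box reconnaissance sketch. `query_model` is a hypothetical stand-in for a hosted inference API (not any specific product); the point is that label-space size and output determinism can be inferred from responses alone.

```python
import hashlib
import random

def query_model(text):
    """Hypothetical stand-in for a hosted inference API.

    Returns a normalized confidence vector, as many classifier
    endpoints do; seeded per input so it behaves like a fixed model."""
    seed = int(hashlib.sha256(text.encode()).hexdigest(), 16) % 2**32
    rng = random.Random(seed)
    raw = [rng.random() for _ in range(4)]  # pretend 4-class model
    total = sum(raw)
    return [v / total for v in raw]

def fingerprint(probe_inputs):
    """Black-box reconnaissance: infer label-space size and output
    determinism purely from API responses."""
    responses = [query_model(p) for p in probe_inputs]
    n_classes = len(responses[0])
    deterministic = all(query_model(p) == query_model(p) for p in probe_inputs)
    return {"n_classes": n_classes, "deterministic": deterministic}

print(fingerprint(["probe-a", "probe-b", "probe-c"]))
# {'n_classes': 4, 'deterministic': True}
```

Even this crude probe recovers facts the operator never published, which is why rate limiting and output coarsening (top-1 labels instead of full confidence vectors) are common mitigations.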
Resource Development (AML.TA0003)
ATLAS description: Techniques for developing resources to support ML attacks — creating adversarial examples, training substitute models, developing attack tooling, and acquiring computational resources.
TopAIThreats mapping:
- Primary pattern: Automated Vulnerability Discovery (DOM-SEC) — using AI systems to develop attack capabilities against other AI systems
- Related patterns: AI-Morphed Malware (DOM-SEC) — AI-assisted development of attack tools
- Causal factors: Adversarial Attack, Weaponization
Beyond ATLAS: The resource development stage is where AI threat development becomes self-reinforcing. Adversaries use AI to develop attacks against AI — a recursive dynamic that TopAIThreats classifies under both Security (DOM-SEC) and Systemic (DOM-SYS) risk domains.
Initial Access (AML.TA0004)
ATLAS description: Techniques for gaining initial access to ML systems — compromising ML supply chains, exploiting public-facing ML APIs, accessing training data repositories, and social engineering ML teams.
TopAIThreats mapping:
- Primary pattern: Data Poisoning (DOM-SEC) — supply chain compromise through training data manipulation
- Related patterns: Tool Misuse & Privilege Escalation (DOM-AGT) — exploiting ML API access to escalate privileges in agentic systems
- Causal factors: Misconfigured Deployment, Inadequate Access Controls
Beyond ATLAS: Initial access to ML systems frequently occurs through the training data supply chain — a vector that traditional cybersecurity frameworks underweight. TopAIThreats documents cases where compromised open-source model weights or poisoned fine-tuning datasets provided persistent access that survived standard security controls.
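A basic control for this vector is pinning artifact digests before weights or fine-tuning data enter the pipeline. The sketch below is illustrative: the manifest, artifact names, and digest values are invented for the example, but the hash-comparison pattern is standard.

```python
import hashlib

# Hypothetical manifest of known-good SHA-256 digests for model artifacts,
# as published by a trusted source (illustrative names and values).
TRUSTED_DIGESTS = {
    "base-model.safetensors": hashlib.sha256(b"trusted weights").hexdigest(),
    "finetune-set.jsonl": hashlib.sha256(b"trusted dataset").hexdigest(),
}

def verify_artifact(name, content):
    """Reject model weights or fine-tuning data whose digest does not
    match the pinned manifest: a basic defense against the poisoned
    supply-chain vector described above."""
    expected = TRUSTED_DIGESTS.get(name)
    if expected is None:
        return False, "unknown artifact"
    actual = hashlib.sha256(content).hexdigest()
    return (actual == expected), ("ok" if actual == expected else "digest mismatch")

print(verify_artifact("base-model.safetensors", b"trusted weights"))  # (True, 'ok')
print(verify_artifact("finetune-set.jsonl", b"poisoned dataset"))     # (False, 'digest mismatch')
```

Digest pinning only detects tampering after the trusted digest was recorded; it does not help if the upstream source was already poisoned, which is why provenance review of the source itself still matters.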
ML Model Access (AML.TA0000)
ATLAS description: Techniques for interacting with ML models to execute attacks — inference API access, gradient-based access, model weight access, and training environment access.
TopAIThreats mapping:
- Primary pattern: Model Inversion & Data Extraction (DOM-SEC) — exploiting model access to extract training data or model internals
- Related patterns: Adversarial Evasion (DOM-SEC) — using model access to craft targeted adversarial inputs
- Causal factors: Inadequate Access Controls, Model Opacity
Beyond ATLAS: The level of model access determines the attack surface. Black-box API access enables different attacks than white-box weight access. TopAIThreats distinguishes these access levels through causal factor analysis — inadequate access controls at the API level create different risk profiles than model weight exposure through supply chain compromise.
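One way to operationalize this distinction is to enumerate access levels and the attack classes each enables. The grouping below is our own illustrative categorization, not an ATLAS artifact:

```python
# Illustrative mapping (assumed categories) from level of model access
# to the attack classes it enables.
ATTACK_SURFACE = {
    "black_box_api": ["query-based evasion", "membership inference"],
    "gradient_access": ["gradient-based adversarial examples", "model inversion"],
    "white_box_weights": ["backdoor insertion", "full model theft"],
}

def reachable_attacks(levels):
    """Union of attack classes reachable given the access levels an
    adversary has obtained."""
    attacks = set()
    for level in levels:
        attacks.update(ATTACK_SURFACE.get(level, []))
    return sorted(attacks)

print(reachable_attacks(["black_box_api", "gradient_access"]))
```

A table like this makes the access-control argument auditable: tightening one access level provably removes specific attack classes from the reachable set.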
Note: Model Opacity and Inadequate Access Controls appear as recurring causal factors across Reconnaissance, ML Model Access, and Exfiltration tactics. These are structural weaknesses that cut across multiple ATLAS tactics — organizations addressing only one tactic at a time may miss the underlying architectural conditions enabling the broader attack surface.
Evasion (ATLAS technique cluster: Evade ML Model and related inference-time attacks)
ATLAS description: Techniques that modify inputs to cause ML models to produce incorrect outputs at inference time — adversarial examples, perturbation attacks, physical-domain attacks, and prompt injection.
TopAIThreats mapping:
- Primary pattern: Adversarial Evasion (DOM-SEC) — crafted inputs designed to bypass model safety controls or alter model behavior
- Related patterns: Misinformation & Hallucinated Content (DOM-INF) — evasion attacks that cause models to produce false information, Cascading Hallucinations (DOM-AGT) — evasion in multi-agent pipelines
- Causal factors: Prompt Injection Vulnerability, Adversarial Attack
Beyond ATLAS: ATLAS treats evasion as a security technique. TopAIThreats reveals that the impact of evasion attacks spans multiple domains:
- Security & Cyber (DOM-SEC): Bypassing safety filters or content moderation
- Information Integrity (DOM-INF): Causing hallucinated or false outputs
- Discrimination & Social Harm (DOM-SOC): Evading fairness constraints to produce biased outputs
- Agentic & Autonomous (DOM-AGT): Prompt injection causing autonomous agents to take unauthorized actions
This cross-domain impact means that purely security-focused mitigations for evasion are insufficient. Treat evasion findings as cross-functional risk items — not just security tickets — involving information integrity, fairness, and agentic safety teams.
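A concrete way to implement that practice is a routing table that fans an evasion detection out to every affected domain team. The detection subtype labels here are assumptions for illustration, not ATLAS identifiers:

```python
# Hypothetical routing table: which TopAIThreats domains an evasion
# finding should be triaged to, by detection subtype (assumed labels).
EVASION_ROUTING = {
    "safety_filter_bypass": ["DOM-SEC"],
    "induced_false_output": ["DOM-SEC", "DOM-INF"],
    "fairness_constraint_evasion": ["DOM-SEC", "DOM-SOC"],
    "agent_prompt_injection": ["DOM-SEC", "DOM-AGT"],
}

def triage(detection_type):
    """Return every domain team that should receive the finding;
    DOM-SEC alone is not the full answer for cross-domain subtypes."""
    return EVASION_ROUTING.get(detection_type, ["DOM-SEC"])

print(triage("agent_prompt_injection"))  # ['DOM-SEC', 'DOM-AGT']
```

The default of DOM-SEC for unknown subtypes keeps the router safe to extend: new detection types still land with the security team until they are classified.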
Poisoning (ATLAS technique cluster: Poison Training Data and related training-time attacks)
ATLAS description: Techniques that manipulate training data or model training processes to introduce backdoors, degrade model performance, or bias model outputs.
TopAIThreats mapping:
- Primary pattern: Data Poisoning (DOM-SEC) — direct manipulation of training pipelines
- Related patterns: Memory Poisoning (DOM-AGT) — poisoning agent memory/context stores, Data Imbalance Bias (DOM-SOC) — systemic data quality issues that produce discriminatory outcomes
- Causal factors: Training Data Bias, Adversarial Attack, Insufficient Safety Testing
Beyond ATLAS: ATLAS focuses on adversarial poisoning — intentional manipulation. TopAIThreats distinguishes between:
- Adversarial (intentional) — DOM-SEC: backdoor insertion, targeted corruption
- Systemic (unintentional) — DOM-SOC: underrepresentation, label noise, historical bias
- Agentic — DOM-AGT: corrupting RAG knowledge bases or agent memory stores
These three variants require fundamentally different mitigations: adversarial poisoning needs integrity verification, systemic bias needs representative curation, and memory poisoning needs runtime validation. Conflating all three as “a security issue” leads to controls that address only the intentional variant.
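As one example of the third mitigation class, a runtime check on agent memory writes might validate provenance and screen for injection phrasing before an entry is committed. The schema, source allowlist, and regex below are illustrative assumptions:

```python
import re

APPROVED_SOURCES = {"internal-kb", "curated-docs"}  # hypothetical allowlist

def validate_memory_entry(entry):
    """Runtime validation for agent memory writes, the mitigation the
    agentic poisoning variant needs (a sketch under an assumed schema)."""
    if entry.get("source") not in APPROVED_SOURCES:
        return False, "untrusted source"
    text = entry.get("text", "")
    # crude injection screen: block imperative override phrasing
    if re.search(r"(?i)ignore (all )?(previous|prior) instructions", text):
        return False, "suspected injection payload"
    return True, "accepted"

print(validate_memory_entry({"source": "internal-kb",
                             "text": "Refund policy is 30 days."}))
# (True, 'accepted')
print(validate_memory_entry({"source": "web-scrape",
                             "text": "Ignore previous instructions."}))
# (False, 'untrusted source')
```

A pattern-matching screen like this is easily bypassed on its own; in practice it would sit alongside provenance tracking and periodic audits of the memory store.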
Exfiltration (AML.TA0010)
ATLAS description: Techniques for extracting information from ML models — training data extraction, model theft, membership inference, and model inversion attacks.
TopAIThreats mapping:
- Primary pattern: Model Inversion & Data Extraction (DOM-SEC) — extracting training data, model weights, or proprietary information
- Related patterns: Sensitive Attribute Inference (DOM-PRI) — inferring private attributes from model behavior, Re-identification Attacks (DOM-PRI) — using extracted data to re-identify individuals
- Causal factors: Model Opacity, Inadequate Access Controls
Beyond ATLAS: Model exfiltration has implications beyond security:
- Privacy & Surveillance (DOM-PRI): Training data extraction can expose personal data, creating regulatory exposure under GDPR, CCPA, and similar data protection law
- Economic & Labor (DOM-ECO): Model theft represents intellectual property loss and competitive advantage erosion
- Discrimination & Social Harm (DOM-SOC): Membership inference can reveal whether specific individuals’ data was used in training, raising consent, fairness, and potential discrimination law exposure
Exfiltration incidents should be assessed not only as security breaches but as potential privacy, IP, and discrimination law events — involving legal, compliance, and DPO stakeholders alongside security teams.
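For intuition on membership inference specifically, the classic loss-threshold baseline guesses that records the model fits unusually well were training members. A minimal sketch (in practice the threshold is calibrated, for example via shadow models):

```python
import math

def nll(prob_true_class):
    """Negative log-likelihood the model assigns to the true class."""
    return -math.log(max(prob_true_class, 1e-12))

def membership_guess(prob_true_class, threshold):
    """Loss-threshold membership inference (a standard baseline attack):
    a record is guessed to be a training member when the model's loss
    on it falls below the threshold."""
    return nll(prob_true_class) < threshold

# The model is very confident on a suspected training record,
# noticeably less so on a fresh record it has never seen.
print(membership_guess(0.99, threshold=0.1))  # True  -> likely a member
print(membership_guess(0.60, threshold=0.1))  # False -> likely a non-member
```

The attack needs nothing but per-record confidences, which is why exposing full probability vectors through an API widens the privacy attack surface described above.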
Impact (AML.TA0011)
ATLAS description: Techniques that use ML attack outcomes to achieve adversary objectives — degrading model performance, denying ML services, manipulating model outputs, and eroding trust in AI systems.
TopAIThreats mapping:
- Primary patterns: Multiple — impact maps across the full taxonomy:
  - Infrastructure Dependency Collapse (DOM-SYS) — denial of ML services cascading to dependent systems
  - Accumulative Risk & Trust Erosion (DOM-SYS) — repeated attacks eroding institutional trust in AI
  - Consensus Reality Erosion (DOM-INF) — manipulated AI outputs degrading shared factual understanding
- Causal factors: Misconfigured Deployment, Over-Automation
Beyond ATLAS: ATLAS’s impact tactic is where the adversarial lens becomes insufficient. The real-world impacts of ML attacks frequently manifest in non-security domains — a successful poisoning attack on a hiring model creates discrimination harm (DOM-SOC), not just a security incident (DOM-SEC). TopAIThreats documents these cross-domain impact chains.
Cross-Reference Summary
| ATLAS Tactic | Primary TopAIThreats Pattern | Primary Domain | Key Causal Factor |
|---|---|---|---|
| Reconnaissance | Model Inversion & Data Extraction | Security & Cyber | Model Opacity |
| Resource Development | Automated Vulnerability Discovery | Security & Cyber | Weaponization |
| Initial Access | Data Poisoning | Security & Cyber | Misconfigured Deployment |
| ML Model Access | Model Inversion & Data Extraction | Security & Cyber | Inadequate Access Controls |
| Evasion | Adversarial Evasion | Security & Cyber | Prompt Injection Vulnerability |
| Poisoning | Data Poisoning | Security & Cyber | Training Data Bias |
| Exfiltration | Model Inversion & Data Extraction | Security & Cyber | Model Opacity |
| Impact | Infrastructure Dependency Collapse | Systemic & Catastrophic | Over-Automation |
Key insight: 7 of the 8 mapped ATLAS entries list Security & Cyber (DOM-SEC) as their primary domain, which is expected given that ATLAS is a security framework; the exception is Impact, whose consequences land primarily in Systemic & Catastrophic (DOM-SYS). “Primary Domain” in the table above means where the attack originates, not where the harm lands. Secondary impacts documented by TopAIThreats consistently cross into Privacy & Surveillance (exfiltration), Discrimination & Social Harm (poisoning), Information Integrity (evasion), and Systemic & Catastrophic (impact). Organizations that treat ATLAS findings as purely security issues will miss the broader risk surface.
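For teams automating triage, the summary table transcribes naturally into a machine-readable structure; the field layout below is our own, not an ATLAS or TopAIThreats schema. It also makes the causal-factor recurrence noted earlier easy to compute:

```python
from collections import Counter

# Transcription of this page's summary table:
# tactic/technique -> (primary pattern, primary domain, key causal factor).
MAPPING = {
    "Reconnaissance": ("Model Inversion & Data Extraction", "Security & Cyber", "Model Opacity"),
    "Resource Development": ("Automated Vulnerability Discovery", "Security & Cyber", "Weaponization"),
    "Initial Access": ("Data Poisoning", "Security & Cyber", "Misconfigured Deployment"),
    "ML Model Access": ("Model Inversion & Data Extraction", "Security & Cyber", "Inadequate Access Controls"),
    "Evasion": ("Adversarial Evasion", "Security & Cyber", "Prompt Injection Vulnerability"),
    "Poisoning": ("Data Poisoning", "Security & Cyber", "Training Data Bias"),
    "Exfiltration": ("Model Inversion & Data Extraction", "Security & Cyber", "Model Opacity"),
    "Impact": ("Infrastructure Dependency Collapse", "Systemic & Catastrophic", "Over-Automation"),
}

def recurring_causal_factors(mapping):
    """Count how often each causal factor recurs across tactics:
    repeated factors signal structural weaknesses, not per-tactic issues."""
    return Counter(factor for _, _, factor in mapping.values())

print(recurring_causal_factors(MAPPING).most_common(1))  # [('Model Opacity', 2)]
```

Structures like this let a detection pipeline look up the affected pattern and domain automatically when an ATLAS tactic identifier appears in an alert.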
What ATLAS Covers That TopAIThreats Does Not
ATLAS provides granular technique-level detail that TopAIThreats does not replicate:
- Specific attack implementations: Pixel-level perturbations, gradient-based adversarial examples, membership inference algorithms
- ATT&CK-style technique IDs: Machine-readable identifiers for adversarial ML techniques
- Procedure examples: Step-by-step attack procedures documented from research papers
- Mitigation mapping: Specific technical countermeasures per technique
TopAIThreats operates at the threat pattern level, not the technique level. Use ATLAS for “how attackers execute adversarial ML attacks” and TopAIThreats for “what are the full-spectrum risks and real-world impacts of AI systems.”
What TopAIThreats Covers That ATLAS Does Not
ATLAS is limited to adversarial attacks on ML systems. It does not address:
- Non-adversarial failures: Hallucination Tendency, Cascading Hallucinations, Overreliance & Automation Bias
- Societal harms: Algorithmic Amplification, Allocational Harm, Representational Harm
- Privacy risks: Mass Surveillance Amplification, Behavioral Profiling Without Consent
- Economic risks: Automation-Induced Job Degradation, Power & Data Concentration
- Control risks: Loss of Human Agency, Implicit Authority Transfer
- Catastrophic risks: Lethal Autonomous Weapon Systems, Strategic Misalignment
- Agentic risks: Goal Drift, Agent-to-Agent Propagation, Multi-Agent Coordination Failures
These 7 non-security domains represent the majority of documented AI harm in the TopAIThreats incident database. Organizations that rely solely on ATLAS for AI risk management have visibility into approximately 1 of 8 threat domains.
How to Use This Mapping
For red teams: Use ATLAS for technique-level attack planning, then use this mapping to understand the full impact chain — which TopAIThreats patterns and domains each technique can trigger.
For blue teams: Use this mapping to connect ATLAS detections to broader organizational risk. When your ML security monitoring detects an ATLAS technique, this page tells you which threat domains may be affected and what causal factors to investigate.
For risk managers: Use this mapping to understand where ATLAS fits within comprehensive AI risk management. ATLAS covers adversarial security threats; TopAIThreats covers the remaining 7 threat domains that ATLAS does not address.
For compliance teams: Cross-reference with the AI Governance Frameworks Compared page to understand how ATLAS techniques relate to regulatory requirements.
Methodology Note
This mapping references MITRE ATLAS v4.6 and the TopAIThreats taxonomy v3.0. Tactic-to-pattern mappings are maintained by the TopAIThreats editorial team and updated when either ATLAS or the taxonomy is revised. This is not an official MITRE publication — it is an independent cross-reference maintained by TopAIThreats. If you believe this mapping contains inaccuracies, contact us for correction.