
MITRE ATLAS Mapping: AI Adversarial Techniques vs Real-World AI Threats

Last updated: 2026-03-12

Why This Mapping Exists

MITRE ATLAS (Adversarial Threat Landscape for Artificial Intelligence Systems) is the most comprehensive knowledge base of adversarial tactics and techniques against AI systems. Built on the ATT&CK framework model, it provides a structured vocabulary for describing how adversaries attack machine learning systems.

However, ATLAS focuses exclusively on adversarial ML security — it does not address non-adversarial AI failures, societal harms, privacy risks, or systemic threats. This mapping bridges that gap by connecting each ATLAS tactic to the corresponding TopAIThreats threat patterns, linking to documented incidents, and identifying which of the 8 risk domains are affected. The result is a reference that places adversarial ML attacks within the broader AI threat landscape. Mappings indicate plausible relationships based on pattern and incident analysis — not exhaustive or official coverage of every ATLAS technique or TopAIThreats threat.

Designed for red and blue teams, risk managers, and compliance leads who already use MITRE ATLAS and need to understand its relationship to real-world AI harms across the full threat landscape.


ATLAS Tactics Mapped to TopAIThreats

Reconnaissance (AML.TA0002)

ATLAS description: Techniques used to gather information about a target ML system — model architecture, training data characteristics, API behavior, and deployment configuration.

TopAIThreats mapping:

  • Primary pattern: Model Inversion & Data Extraction
  • Primary domain: Security & Cyber (DOM-SEC)
  • Key causal factor: Model Opacity

Beyond ATLAS: Reconnaissance against AI systems differs from traditional cyber reconnaissance because the model itself leaks information through its outputs. Every API response contains implicit information about model architecture and training data — a fundamental property that ATLAS documents but that TopAIThreats traces to the causal factor of model opacity.
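
A minimal sketch of this leak (the toy classifier, its logits, and the endpoint shape are all invented for illustration, not any real API): because softmax preserves logit differences, a caller with nothing but black-box probability outputs can recover internal logit gaps exactly.

```python
import math

# Toy stand-in for a deployed model's inference endpoint (hypothetical):
# it returns full softmax probabilities, as many real endpoints do.
def inference_api(x):
    logits = [2.0 * x, 1.0 - x, 0.5]  # internal parameters the operator treats as secret
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Reconnaissance from outputs alone: softmax preserves logit differences,
# so log(p_i) - log(p_j) equals logit_i - logit_j with no white-box access.
def recovered_logit_gap(probs, i, j):
    return math.log(probs[i]) - math.log(probs[j])

probs = inference_api(1.0)              # internal logits: [2.0, 0.0, 0.5]
gap = recovered_logit_gap(probs, 0, 1)  # recovers the 2.0 gap exactly
```

Returning class labels only, or truncated top-k probabilities, narrows this channel but does not close it; that is why the mapping treats model opacity as a structural causal factor rather than a bug.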


Resource Development (AML.TA0003)

ATLAS description: Techniques for developing resources to support ML attacks — creating adversarial examples, training substitute models, developing attack tooling, and acquiring computational resources.

TopAIThreats mapping:

  • Primary pattern: Automated Vulnerability Discovery
  • Primary domain: Security & Cyber (DOM-SEC)
  • Key causal factor: Weaponization

Beyond ATLAS: The resource development stage is where AI threat development becomes self-reinforcing. Adversaries use AI to develop attacks against AI — a recursive dynamic that TopAIThreats classifies under both Security (DOM-SEC) and Systemic (DOM-SYS) risk domains.


Initial Access (AML.TA0004)

ATLAS description: Techniques for gaining initial access to ML systems — compromising ML supply chains, exploiting public-facing ML APIs, accessing training data repositories, and social engineering ML teams.

TopAIThreats mapping:

  • Primary pattern: Data Poisoning
  • Primary domain: Security & Cyber (DOM-SEC)
  • Key causal factor: Misconfigured Deployment

Beyond ATLAS: Initial access to ML systems frequently occurs through the training data supply chain — a vector that traditional cybersecurity frameworks underweight. TopAIThreats documents cases where compromised open-source model weights or poisoned fine-tuning datasets provided persistent access that survived standard security controls.
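
One standard control against compromised third-party weights is digest pinning, sketched below. This is a generic supply-chain measure, not a procedure prescribed by ATLAS or TopAIThreats, and the file names and digests are hypothetical.

```python
import hashlib

# Compute a file's SHA-256 in streamed chunks, so large weight files
# do not need to fit in memory.
def file_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare the downloaded artifact against the digest pinned when the
# weights were originally vetted; load only on an exact match.
def weights_are_vetted(path: str, pinned_digest: str) -> bool:
    return file_sha256(path) == pinned_digest
```

Pinning verifies that the bytes have not changed since vetting; it does not establish that the vetted weights were clean in the first place, which is why the document pairs this vector with Data Poisoning rather than treating it as solved by integrity checks.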


ML Model Access (AML.TA0000)

ATLAS description: Techniques for interacting with ML models to execute attacks — inference API access, gradient-based access, model weight access, and training environment access.

TopAIThreats mapping:

  • Primary pattern: Model Inversion & Data Extraction
  • Primary domain: Security & Cyber (DOM-SEC)
  • Key causal factor: Inadequate Access Controls

Beyond ATLAS: The level of model access determines the attack surface. Black-box API access enables different attacks than white-box weight access. TopAIThreats distinguishes these access levels through causal factor analysis — inadequate access controls at the API level create different risk profiles than model weight exposure through supply chain compromise.

Note: Model Opacity and Inadequate Access Controls appear as recurring causal factors across Reconnaissance, ML Model Access, and Exfiltration tactics. These are structural weaknesses that cut across multiple ATLAS tactics — organizations addressing only one tactic at a time may miss the underlying architectural conditions enabling the broader attack surface.


Evasion (ATLAS Defense Evasion, AML.TA0007)

ATLAS description: Techniques that modify inputs to cause ML models to produce incorrect outputs at inference time — adversarial examples, perturbation attacks, physical-domain attacks, and prompt injection.

TopAIThreats mapping:

  • Primary pattern: Adversarial Evasion
  • Primary domain: Security & Cyber (DOM-SEC)
  • Key causal factor: Prompt Injection Vulnerability

Beyond ATLAS: ATLAS treats evasion as a security technique. TopAIThreats reveals that the impact of evasion attacks spans multiple domains:

  • Security & Cyber (DOM-SEC): Bypassing safety filters or content moderation
  • Information Integrity (DOM-INF): Causing hallucinated or false outputs
  • Discrimination & Social Harm (DOM-SOC): Evading fairness constraints to produce biased outputs
  • Agentic & Autonomous (DOM-AGT): Prompt injection causing autonomous agents to take unauthorized actions

This cross-domain impact means that purely security-focused mitigations for evasion are insufficient. Treat evasion findings as cross-functional risk items — not just security tickets — involving information integrity, fairness, and agentic safety teams.
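
A small routing sketch of the cross-functional handling described above (the finding-type names and the helper are illustrative, not a TopAIThreats API; the domain codes come from the bullet list):

```python
# Map each evasion finding type to every risk domain it can affect,
# so the finding reaches more queues than the security ticket alone.
EVASION_IMPACTS = {
    "safety_filter_bypass":        ["DOM-SEC"],
    "induced_false_output":        ["DOM-SEC", "DOM-INF"],
    "fairness_constraint_evasion": ["DOM-SEC", "DOM-SOC"],
    "agent_prompt_injection":      ["DOM-SEC", "DOM-AGT"],
}

def affected_domains(finding_type: str) -> list:
    # Unrecognized evasion findings still default to a security review.
    return EVASION_IMPACTS.get(finding_type, ["DOM-SEC"])
```

The point of the structure is that DOM-SEC appears in every row: the security team is always involved, but for three of the four finding types it should not be involved alone.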


Poisoning (ATLAS Poison Training Data, AML.T0020)

ATLAS description: Techniques that manipulate training data or model training processes to introduce backdoors, degrade model performance, or bias model outputs.

TopAIThreats mapping:

  • Primary pattern: Data Poisoning
  • Primary domain: Security & Cyber (DOM-SEC)
  • Key causal factor: Training Data Bias

Beyond ATLAS: ATLAS focuses on adversarial poisoning — intentional manipulation. TopAIThreats distinguishes between:

  1. Adversarial (intentional) — DOM-SEC: backdoor insertion, targeted corruption
  2. Systemic (unintentional) — DOM-SOC: underrepresentation, label noise, historical bias
  3. Agentic — DOM-AGT: corrupting RAG knowledge bases or agent memory stores

These three variants require fundamentally different mitigations: adversarial poisoning needs integrity verification, systemic bias needs representative curation, and memory poisoning needs runtime validation. Conflating all three as “a security issue” leads to controls that address only the intentional variant.
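
As a sketch of the first mitigation only, integrity verification for the adversarial variant can be as simple as pinning a SHA-256 digest per training shard and rejecting anything that no longer matches (shard names and contents below are hypothetical):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Digests pinned when each shard was vetted.
PINNED_DIGESTS = {
    "shard-001": sha256_hex(b"trusted training records v1"),
}

def shard_is_intact(name: str, data: bytes) -> bool:
    # Fails closed: unknown shards are rejected, not silently accepted.
    return PINNED_DIGESTS.get(name) == sha256_hex(data)
```

Note what this cannot catch: a shard that hashes correctly but underrepresents a population (variant 2), or corruption of an agent's memory store at runtime (variant 3). Those need representative curation and runtime validation respectively, exactly as the list above separates them.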


Exfiltration (AML.TA0010)

ATLAS description: Techniques for extracting information from ML models — training data extraction, model theft, membership inference, and model inversion attacks.

TopAIThreats mapping:

  • Primary pattern: Model Inversion & Data Extraction
  • Primary domain: Security & Cyber (DOM-SEC)
  • Key causal factor: Model Opacity

Beyond ATLAS: Model exfiltration has implications beyond security:

  • Privacy & Surveillance (DOM-PRI): Training data extraction can expose personal data, creating GDPR/CCPA violations and regulatory exposure under data protection law
  • Economic & Labor (DOM-ECO): Model theft represents intellectual property loss and competitive advantage erosion
  • Discrimination & Social Harm (DOM-SOC): Membership inference can reveal whether specific individuals’ data was used in training, raising consent, fairness, and potential discrimination law exposure

Exfiltration incidents should be assessed not only as security breaches but as potential privacy, IP, and discrimination law events — involving legal, compliance, and DPO stakeholders alongside security teams.
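
The membership inference mechanic behind the DOM-SOC bullet can be illustrated in a few lines. This is a toy with made-up confidence values, not a real attack implementation: models tend to be more confident on records they were trained on, so an attacker thresholds the per-record confidence returned by a prediction API.

```python
# Toy membership inference: guess "member" when the model's confidence
# on a candidate record exceeds a calibrated threshold. All numbers
# here are invented for illustration.
def looks_like_member(confidence: float, threshold: float = 0.9) -> bool:
    return confidence >= threshold

trained_on_record = 0.97  # hypothetical confidence on a training-set record
unseen_record = 0.61      # hypothetical confidence on a record never seen
```

Even this crude version shows why the harm is consent-related rather than purely technical: the attacker learns a fact about a specific person (their data was used) without ever extracting the record itself.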


Impact (AML.TA0011)

ATLAS description: Techniques that use ML attack outcomes to achieve adversary objectives — degrading model performance, denying ML services, manipulating model outputs, and eroding trust in AI systems.

TopAIThreats mapping:

  • Primary pattern: Infrastructure Dependency Collapse
  • Primary domain: Systemic & Catastrophic (DOM-SYS)
  • Key causal factor: Over-Automation

Beyond ATLAS: ATLAS’s impact tactic is where the adversarial lens becomes insufficient. The real-world impacts of ML attacks frequently manifest in non-security domains — a successful poisoning attack on a hiring model creates discrimination harm (DOM-SOC), not just a security incident (DOM-SEC). TopAIThreats documents these cross-domain impact chains.


Cross-Reference Summary

ATLAS Tactic         | Primary TopAIThreats Pattern       | Primary Domain          | Key Causal Factor
Reconnaissance       | Model Inversion & Data Extraction  | Security & Cyber        | Model Opacity
Resource Development | Automated Vulnerability Discovery  | Security & Cyber        | Weaponization
Initial Access       | Data Poisoning                     | Security & Cyber        | Misconfigured Deployment
ML Model Access      | Model Inversion & Data Extraction  | Security & Cyber        | Inadequate Access Controls
Evasion              | Adversarial Evasion                | Security & Cyber        | Prompt Injection Vulnerability
Poisoning            | Data Poisoning                     | Security & Cyber        | Training Data Bias
Exfiltration         | Model Inversion & Data Extraction  | Security & Cyber        | Model Opacity
Impact               | Infrastructure Dependency Collapse | Systemic & Catastrophic | Over-Automation

Key insight: all rows but one list Security & Cyber (DOM-SEC) as the primary domain (Impact maps to Systemic & Catastrophic), which is expected given that ATLAS is a security framework. However, “Primary Domain” in the table above means where the attack originates, not where the harm lands. Secondary impacts documented by TopAIThreats consistently cross into Privacy & Surveillance (exfiltration), Discrimination & Social Harm (poisoning), Information Integrity (evasion), and Systemic & Catastrophic (impact). Organizations that treat ATLAS findings as purely security issues will miss the broader risk surface.
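
For teams wiring this mapping into tooling, the summary table translates directly into a lookup structure. The dictionary below simply restates the table; the field names and the `enrich` helper are illustrative, not an official TopAIThreats schema.

```python
# Cross-reference table as a lookup: tactic -> (pattern, domain, causal factor).
ATLAS_TO_TOPAITHREATS = {
    "Reconnaissance":       ("Model Inversion & Data Extraction", "Security & Cyber", "Model Opacity"),
    "Resource Development": ("Automated Vulnerability Discovery", "Security & Cyber", "Weaponization"),
    "Initial Access":       ("Data Poisoning", "Security & Cyber", "Misconfigured Deployment"),
    "ML Model Access":      ("Model Inversion & Data Extraction", "Security & Cyber", "Inadequate Access Controls"),
    "Evasion":              ("Adversarial Evasion", "Security & Cyber", "Prompt Injection Vulnerability"),
    "Poisoning":            ("Data Poisoning", "Security & Cyber", "Training Data Bias"),
    "Exfiltration":         ("Model Inversion & Data Extraction", "Security & Cyber", "Model Opacity"),
    "Impact":               ("Infrastructure Dependency Collapse", "Systemic & Catastrophic", "Over-Automation"),
}

def enrich(tactic: str) -> dict:
    # Enrich an ATLAS tactic detection with its TopAIThreats context.
    pattern, domain, factor = ATLAS_TO_TOPAITHREATS[tactic]
    return {"pattern": pattern, "primary_domain": domain, "key_causal_factor": factor}
```

A detection pipeline could call `enrich()` on each ATLAS tactic hit to decide which non-security stakeholders to notify, per the secondary impacts discussed above.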


What ATLAS Covers That TopAIThreats Does Not

ATLAS provides granular technique-level detail that TopAIThreats does not replicate:

  • Specific attack implementations: Pixel-level perturbations, gradient-based adversarial examples, membership inference algorithms
  • ATT&CK-style technique IDs: Machine-readable identifiers for adversarial ML techniques
  • Procedure examples: Step-by-step attack procedures documented from research papers
  • Mitigation mapping: Specific technical countermeasures per technique

TopAIThreats operates at the threat pattern level, not the technique level. Use ATLAS for “how attackers execute adversarial ML attacks” and TopAIThreats for “what are the full-spectrum risks and real-world impacts of AI systems.”


What TopAIThreats Covers That ATLAS Does Not

ATLAS is limited to adversarial attacks on ML systems. It does not address the seven non-security TopAIThreats risk domains, which include:

  • Privacy & Surveillance (DOM-PRI)
  • Information Integrity (DOM-INF)
  • Discrimination & Social Harm (DOM-SOC)
  • Economic & Labor (DOM-ECO)
  • Agentic & Autonomous (DOM-AGT)
  • Systemic & Catastrophic (DOM-SYS)

These 7 non-security domains represent the majority of documented AI harm in the TopAIThreats incident database. Organizations that rely solely on ATLAS for AI risk management have visibility into approximately 1 of 8 threat domains.


How to Use This Mapping

For red teams: Use ATLAS for technique-level attack planning, then use this mapping to understand the full impact chain — which TopAIThreats patterns and domains each technique can trigger.

For blue teams: Use this mapping to connect ATLAS detections to broader organizational risk. When your ML security monitoring detects an ATLAS technique, this page tells you which threat domains may be affected and what causal factors to investigate.

For risk managers: Use this mapping to understand where ATLAS fits within comprehensive AI risk management. ATLAS covers adversarial security threats; TopAIThreats covers the remaining 7 threat domains that ATLAS does not address.

For compliance teams: Cross-reference with the AI Governance Frameworks Compared page to understand how ATLAS techniques relate to regulatory requirements.


Methodology Note

This mapping references MITRE ATLAS v4.6 and the TopAIThreats taxonomy v3.0. Tactic-to-pattern mappings are maintained by the TopAIThreats editorial team and updated when either ATLAS or the taxonomy is revised. This is not an official MITRE publication — it is an independent cross-reference maintained by TopAIThreats. If you believe this mapping contains inaccuracies, contact us for correction.