Governance Concept

MITRE ATLAS

The Adversarial Threat Landscape for AI Systems (ATLAS) is a knowledge base maintained by MITRE Corporation that catalogues adversarial tactics, techniques, and procedures (TTPs) targeting machine learning systems. Modelled on the MITRE ATT&CK framework for cybersecurity, ATLAS provides a structured taxonomy of AI-specific attacks with documented case studies.

Definition

MITRE ATLAS (Adversarial Threat Landscape for AI Systems) is a globally accessible, living knowledge base of adversarial tactics, techniques, and case studies targeting machine learning systems. Launched by MITRE Corporation, ATLAS extends the methodology of the widely adopted MITRE ATT&CK framework — which catalogues adversarial behaviour in traditional cybersecurity — to the domain of artificial intelligence. The framework organises AI-specific threats into tactics (adversary goals such as reconnaissance, initial access, and exfiltration) and techniques (specific methods such as model evasion, data poisoning, and model extraction). Each technique entry includes a description, procedure examples, mitigations, and links to documented case studies.
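The tactic/technique structure described above can be sketched as a minimal data model. This is illustrative only: the field names and example IDs are assumptions for demonstration, not the official ATLAS data schema.

```python
from dataclasses import dataclass, field

@dataclass
class Technique:
    """One ATLAS technique entry: description, mitigations, case-study links.

    Field names are hypothetical, chosen to mirror the entry contents
    described above; they are not the official ATLAS schema.
    """
    technique_id: str
    name: str
    description: str
    mitigations: list = field(default_factory=list)
    case_studies: list = field(default_factory=list)

@dataclass
class Tactic:
    """An adversary goal (e.g. reconnaissance, exfiltration) grouping techniques."""
    tactic_id: str
    name: str
    techniques: list = field(default_factory=list)

# Illustrative instance; the IDs below are placeholders, not verified ATLAS IDs.
evasion = Technique(
    technique_id="AML.T00XX",
    name="Model Evasion",
    description="Craft adversarial inputs that the target model misclassifies",
)
initial_access = Tactic("AML.TA00XX", "Initial Access", techniques=[evasion])
```

The nesting mirrors how ATT&CK-style frameworks are organised: tactics answer "why" (the adversary's goal) while techniques answer "how", and each technique carries its own mitigations and case-study references.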

How It Relates to AI Threats

MITRE ATLAS is a key reference within the Security and Cyber Threats domain and serves as a competing taxonomy to the TopAIThreats classification system. ATLAS focuses specifically on adversarial ML techniques — attacks that directly target the machine learning components of AI systems. Its scope is therefore narrower than TopAIThreats, which covers eight domains including societal harm, economic disruption, and autonomous system risks. In exchange, ATLAS offers granular technical detail on attack procedures, backed by approximately 30 documented case studies. Mapping TopAIThreats threat patterns to ATLAS techniques gives security practitioners a valuable cross-reference between the two taxonomies.
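Such a cross-reference can be represented as a simple lookup table. The sketch below is a hypothetical example: the TopAIThreats pattern names and the ATLAS technique IDs paired with them are placeholders, not an official mapping.

```python
# Hypothetical mapping from TopAIThreats pattern names to ATLAS technique IDs.
# Both the pattern names and the ID pairings are illustrative assumptions.
THREAT_TO_ATLAS = {
    "model-evasion": ["AML.T00XX"],
    "data-poisoning": ["AML.T00YY"],
    "model-extraction": ["AML.T00ZZ"],
}

def atlas_techniques_for(pattern: str) -> list:
    """Return the ATLAS technique IDs mapped to a TopAIThreats pattern.

    Returns an empty list for patterns with no documented ATLAS counterpart,
    which is expected for TopAIThreats domains outside adversarial ML
    (e.g. societal harm or economic disruption).
    """
    return THREAT_TO_ATLAS.get(pattern, [])
```

A one-to-many mapping is deliberate: a single high-level threat pattern often decomposes into several concrete attack techniques at the ATLAS level of granularity.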

Why It Occurs

  • The proliferation of machine learning in critical systems created demand for structured threat intelligence specific to AI
  • Security teams already familiar with MITRE ATT&CK needed an analogous framework for AI-specific threats
  • Academic research on adversarial ML produced a growing body of attack techniques that needed systematic organisation
  • Incident response teams required a common vocabulary for describing and communicating AI-targeted attacks
  • Government agencies and defence organisations needed standardised threat models for AI systems in their procurement and deployment pipelines

Real-World Context

MITRE ATLAS is referenced by the US Department of Defense, NIST, and multiple national cybersecurity agencies as a resource for AI threat modelling. The framework catalogues approximately 30 case studies of real-world adversarial ML incidents, including attacks on commercial image classifiers, natural language models, and autonomous systems. ATLAS is freely available and continuously updated as new adversarial techniques and incidents are documented. It complements OWASP’s application-focused guidance with technique-level adversarial intelligence.

Last updated: 2026-04-03