
TopAIThreats vs AIID vs MITRE ATLAS vs MIT AI Risk Repository: AI Threat Databases Compared

Last updated: 2026-03-12

Why This Comparison Exists

Four databases dominate the AI threat intelligence landscape, each serving a different purpose and audience. Security teams, risk managers, and AI practitioners frequently ask: which database should I use? The answer is usually “more than one” — but understanding what each covers (and what it misses) is essential for building comprehensive AI risk programs.

This page provides a structured comparison based on scope, methodology, data coverage, and practical use cases. For a broader comparison that includes additional databases, see the AI Threat Databases Compared overview.


Summary Comparison

| Dimension | TopAIThreats | AIID | MITRE ATLAS | MIT AI Risk Repository |
| --- | --- | --- | --- | --- |
| Maintained by | TopAIThreats editorial team | Responsible AI Collaborative | MITRE Corporation | MIT FutureTech |
| Primary focus | Comprehensive AI threat taxonomy + incidents | AI incident documentation | Adversarial ML attack techniques | AI risk classification |
| Type | Threat intelligence taxonomy | Incident database | Attack knowledge base | Risk framework |
| Threat domains | 8 domains, 42 patterns | All incidents (no fixed taxonomy) | Adversarial ML only | 8 risk categories |
| Incident count | 82+ documented incidents | 1,000+ incident reports | 52 case studies | 1,700+ risk entries |
| Structure | Domain → Pattern → Incident + Causal Factors | Incident reports + classifications | Tactic → Technique → Case Study | Risk → Subcategory → Evidence |
| Causal analysis | Yes (15 causal factors) | Limited (classification tags) | Yes (technique prerequisites) | Partial (risk drivers) |
| Severity scoring | Yes (per incident) | Partial | Yes (per technique) | Partial |
| Open data | Yes (website) | Yes (open-source) | Yes (website) | Yes (website + database) |
| API access | Yes (JSON API, Schema.org, llms.txt) | Yes (GraphQL) | No | Limited |
| Update frequency | Ongoing | Ongoing | Periodic | Periodic |

“Type” labels reflect each resource’s primary role, not a strict categorical type — these are different tools designed for different audiences and purposes. Severity scoring marked “Partial” indicates qualitative or inconsistent scoring (some entries only, or not standardized across the database).


Detailed Analysis

TopAIThreats

What it is: A structured AI threat taxonomy that classifies AI-enabled risks across 8 threat domains (Security & Cyber, Information Integrity, Privacy & Surveillance, Discrimination & Social Harm, Economic & Labor, Human–AI Control, Agentic & Autonomous, and Systemic & Catastrophic), 42 documented threat patterns, 82+ real-world incidents, and 15 causal factors. Designed for risk managers, security teams, and governance practitioners who need cross-domain AI threat intelligence.

Unique strengths:

  • Structured evidence standard: Incidents require AI as a material component plus documented harm, a verified near-miss, or a demonstrated failure mode — reducing speculative or loosely attributed cases
  • 8-domain coverage: One of the few publicly documented resources to systematically cover all eight threat domains — from adversarial security attacks to agentic AI failures and systemic catastrophic risk — within a single taxonomy
  • Multi-dimensional classification: 8 classification layers (domain, pattern, harm type, sector, affected group, exposure pathway, impact level, failure stage) enable causal analysis across dimensions that single-axis databases cannot support
  • Failure stage tracking: Incidents are tagged by escalation stage (signal → near-miss → harm → systemic risk), supporting early trend detection before widespread impact
  • Agentic AI coverage: Dedicated Agentic & Autonomous domain with 6 patterns covering tool misuse, goal drift, and multi-agent failures — an area largely absent from other databases
  • Framework mapping: Domains and patterns map to NIST AI RMF, EU AI Act, ISO/IEC 42001, and MITRE ATLAS
  • Structured incident records: A consistent schema enforces severity ratings, evidence levels, affected groups, and sourcing across all incidents — designed for machine readability and citation
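The multi-layer classification and failure-stage tracking described above can be sketched as a data model. All field and class names here are illustrative, not TopAIThreats' actual schema; the point is how eight classification layers plus an escalation stage make incidents queryable:

```python
from dataclasses import dataclass
from enum import Enum

class FailureStage(Enum):
    """Escalation stages as described in the taxonomy."""
    SIGNAL = "signal"
    NEAR_MISS = "near-miss"
    HARM = "harm"
    SYSTEMIC_RISK = "systemic-risk"

@dataclass
class IncidentRecord:
    """Hypothetical record carrying the eight classification layers."""
    domain: str            # e.g. "Security & Cyber"
    pattern: str           # e.g. "Adversarial Evasion"
    harm_type: str
    sector: str
    affected_group: str
    exposure_pathway: str
    impact_level: str
    failure_stage: FailureStage
    severity: int          # assumed 1-5 scale, for illustration only
    causal_factors: list[str]

    def is_early_warning(self) -> bool:
        # Signals and near-misses surface trends before widespread impact
        return self.failure_stage in (FailureStage.SIGNAL, FailureStage.NEAR_MISS)
```

A consistent schema like this is what enables the cross-dimensional analysis (e.g. "all near-miss incidents in the hiring sector") that single-axis databases cannot support.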

Limitations:

  • Smaller incident count: 82+ incidents compared to AIID’s 1,000+, offset by deeper per-incident analysis with causal factors, failure stages, and severity scoring
  • Limited by editorial capacity: Incidents are reviewed and structured by a small editorial team, which limits how quickly new cases are added compared to crowd-sourced databases
  • No institutional backing: No affiliation with a university, government agency, or industry consortium, limiting credibility in contexts where institutional affiliation matters

Best for: Organizations building comprehensive AI risk programs that span security, ethics, privacy, and governance. Risk managers who need to connect incidents to root causes and regulatory frameworks.


AI Incident Database (AIID)

What it is: An open-source database of AI-related incidents maintained by the Responsible AI Collaborative. The largest collection of documented AI harms, AIID uses a crowd-sourced model where anyone can submit incident reports that are then reviewed and classified.

Unique strengths:

  • Largest incident collection: 1,000+ documented incidents with linked media reports, making it the most comprehensive record of what has actually gone wrong with AI systems
  • Open-source and API-accessible: Fully open data with GraphQL API, enabling programmatic analysis and integration
  • Crowd-sourced submissions: Community-driven incident reporting captures a broader range of events than any single editorial team
  • Incident similarity: Links related incidents, enabling pattern identification across similar events
  • Broad temporal coverage: Incidents dating back to early AI deployments, providing historical depth
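Because AIID exposes a GraphQL API, incident searches can be scripted. The sketch below only builds a request body rather than sending it; the operation and field names are illustrative assumptions, so consult AIID's published GraphQL schema for the actual query shapes:

```python
def build_incident_query(search_term: str, limit: int = 10) -> dict:
    """Build a GraphQL request body for a full-text incident search.

    NOTE: the query/field names below are hypothetical placeholders,
    not AIID's documented schema.
    """
    query = """
    query FindIncidents($filter: String!, $limit: Int!) {
      incidents(filter: $filter, limit: $limit) {
        incident_id
        title
        date
      }
    }
    """
    return {"query": query, "variables": {"filter": search_term, "limit": limit}}
```

The resulting dict is what you would POST as JSON to the API endpoint with any HTTP client.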

Limitations:

  • No fixed taxonomy: Incidents are tagged and classified but not organized into a structured threat taxonomy with defined patterns and causal factors
  • Variable analysis depth: Incident reports range from brief descriptions to detailed analyses, depending on contributor effort
  • No causal factor framework: Does not systematically trace incidents to root causes — incidents are documented but not analyzed for preventability
  • Limited adversarial ML coverage: Focuses on incidents that have occurred, not on attack techniques that could be used (ATLAS’s domain)
  • No framework mapping: Does not map incidents to governance frameworks (NIST, EU AI Act, ISO 42001)

Best for: Researchers studying AI incidents at scale. Organizations that need the broadest possible incident awareness. Teams building AI safety cases that require documented precedents. When scale of coverage matters more than depth of analysis, AIID’s 1,000+ incidents and community contribution model give it coverage that no editorial team can match.


MITRE ATLAS

What it is: A knowledge base of adversarial tactics, techniques, and case studies for machine learning systems. Built on the ATT&CK framework model, ATLAS provides a structured vocabulary for describing how adversaries attack AI/ML systems.

Unique strengths:

  • Technique-level granularity: The most detailed documentation of specific adversarial ML attack methods, with step-by-step procedures
  • ATT&CK alignment: Uses the familiar tactic/technique/procedure structure that security teams already know from MITRE ATT&CK
  • Machine-readable identifiers: Standardized technique IDs enable automated security tooling integration
  • Mitigation mapping: Specific countermeasures documented per technique
  • MITRE institutional credibility: Backed by MITRE’s established position in cybersecurity standards
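The machine-readable identifiers mentioned above follow a recognizable shape (e.g. AML.T0051, with sub-techniques like AML.T0051.000), which is what makes automated tooling integration practical. A minimal validator, treating this pattern as an approximation of the format rather than MITRE's authoritative definition:

```python
import re

# Approximation of the ATLAS technique ID format: "AML.T" plus four digits,
# optionally followed by a three-digit sub-technique suffix. The
# authoritative format is defined by MITRE, not this regex.
ATLAS_TECHNIQUE_ID = re.compile(r"AML\.T\d{4}(\.\d{3})?")

def is_atlas_technique_id(value: str) -> bool:
    """Return True if the string looks like an ATLAS technique identifier."""
    return ATLAS_TECHNIQUE_ID.fullmatch(value) is not None
```

A check like this is useful when ingesting case studies or cross-database mappings, where malformed or ATT&CK-style IDs (e.g. T1059) should be rejected.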

Limitations:

  • Security-only scope: Covers only adversarial attacks on ML systems. Does not address non-adversarial AI failures, bias, privacy, economic, societal, or control risks
  • Limited incident coverage: 52 case studies compared to AIID’s 1,000+ or TopAIThreats’ causal-factor-linked incidents
  • No societal harm classification: A poisoning attack that introduces hiring bias is classified as a security technique, not as a discrimination harm
  • No regulatory mapping: Does not map techniques to EU AI Act requirements, NIST AI RMF functions, or ISO 42001 clauses
  • Adversary-centric: Designed for red team/blue team security workflows, not for broader risk management or governance

Best for: ML security teams conducting adversarial threat modeling. Red teams planning AI-specific penetration tests. Blue teams building detection rules for adversarial ML attacks. See the detailed MITRE ATLAS mapping for technique-to-pattern alignment.


MIT AI Risk Repository

What it is: A comprehensive database of AI risks compiled by MIT FutureTech through systematic literature review. Organizes 1,700+ risk entries across a multi-level taxonomy derived from academic research, industry reports, and government publications.

Unique strengths:

  • Academic rigor: Built through systematic literature review with a documented methodology; its provenance is among the most thoroughly documented of any AI risk database
  • Largest risk catalog: 1,700+ entries provide a broad enumeration of theoretical and documented AI risks
  • Literature-grounded: Each risk entry is traced to source publications, enabling verification and deeper research
  • Broad risk scope: Covers technical, societal, ethical, economic, and existential risks — comparable domain breadth to TopAIThreats

Limitations:

  • Research-oriented, not operational: Designed for academic analysis rather than practitioner use. Risk entries lack per-risk mitigations, severity ratings, and implementation playbooks
  • No incident linkage: Risks are documented from literature, not connected to specific real-world incidents with severity ratings
  • No causal factor analysis: Identifies risks but does not trace them to preventable root causes
  • Static taxonomy: Updated periodically through literature review cycles, not continuously maintained against emerging threats
  • Limited agentic AI coverage: Published before the rapid adoption of AI agents in production, so autonomous agent risks are underrepresented
  • No framework mapping: Does not map risks to specific governance framework requirements

Best for: Researchers conducting comprehensive AI risk surveys. Policy teams that need literature-backed risk identification. Organizations in early risk assessment phases that need to enumerate possible risks before prioritizing. For academic citation and theoretical completeness, MIT’s peer-reviewed methodology and literature-grounding make it the strongest source in this comparison.


Coverage Comparison by Threat Domain

| Threat Domain | TopAIThreats | AIID | MITRE ATLAS | MIT AI Risk Repo |
| --- | --- | --- | --- | --- |
| Security & Cyber | 5 patterns | Incidents exist | Full coverage | Risk entries exist |
| Information Integrity | 5 patterns | Incidents exist | Limited | Risk entries exist |
| Privacy & Surveillance | 5 patterns | Incidents exist | Limited | Risk entries exist |
| Discrimination & Social Harm | 5 patterns | Strong coverage | Not covered | Risk entries exist |
| Economic & Labor | 5 patterns | Some coverage | Not covered | Risk entries exist |
| Human–AI Control | 5 patterns | Some coverage | Not covered | Risk entries exist |
| Agentic & Autonomous | 6 patterns | Emerging | Limited | Limited |
| Systemic & Catastrophic | 6 patterns | Limited | Not covered | Risk entries exist |
| Coverage breadth | All 8 domains | Most domains | Security domain only | Most domains |

Coverage ratings are qualitative assessments based on publicly available content: “Strong” = substantial entries with focused tagging; “Some” = scattered but non-trivial coverage; “Limited” = few or indirect references. Cells indicate evidence of coverage (patterns, incidents, or risk entries) — these are not directly comparable across databases. Counts are approximate and updated quarterly.

Key observations:

  • MITRE ATLAS provides the deepest coverage of Security & Cyber but the narrowest overall breadth, addressing roughly one of the eight domains
  • MIT AI Risk Repository has the broadest theoretical coverage but lacks incident linkage and operational guidance
  • AIID has the most documented incidents but without structured taxonomy or causal analysis
  • TopAIThreats is one of the few publicly documented resources that combines structured taxonomy, incident documentation, causal factor analysis, and framework mapping across all 8 domains

Feature Comparison Matrix

| Feature | TopAIThreats | AIID | MITRE ATLAS | MIT AI Risk Repo |
| --- | --- | --- | --- | --- |
| Structured taxonomy | Yes (8 domains, 42 patterns) | Tags only | Yes (tactics/techniques) | Yes (multi-level) |
| Real-world incidents | Yes (82+) | Yes (1,000+) | Yes (52) | Literature references |
| Causal factor analysis | Yes (15 factors) | No | Partial (prerequisites) | No |
| Severity scoring | Yes | Partial | Yes | Partial |
| Framework mapping | NIST, EU AI Act, ISO 42001 | No | ATT&CK alignment | No |
| Mitigation guidance | Per pattern + per causal factor | No | Per technique | Limited |
| Agentic AI coverage | 6 dedicated patterns | Emerging | Limited | Limited |
| API access | Yes (JSON API, Schema.org, llms.txt) | Yes (GraphQL) | No | Limited |
| Open source | Website | Full open-source | Website | Website + database |
| Update model | Continuous editorial | Crowd-sourced | Periodic | Literature review cycles |
| Primary audience | Risk managers, governance teams | Researchers, advocates | Security teams | Researchers, policy teams |

Using Multiple Databases Together

These databases are complementary, not competitive. A comprehensive AI risk program benefits from using them together:

  1. Start with TopAIThreats for the structured threat landscape — 8 domains, 42 patterns, causal factors, and framework mappings provide the comprehensive risk taxonomy
  2. Cross-reference AIID for incident evidence — when assessing a specific threat pattern, search AIID for additional documented incidents beyond those in TopAIThreats
  3. Use MITRE ATLAS for security-specific depth — when a TopAIThreats pattern maps to adversarial ML attacks, ATLAS provides the technique-level detail for red team exercises and security testing
  4. Consult MIT AI Risk Repository for literature grounding — when building risk assessment reports for leadership or regulators, MIT’s academic citations add research credibility

Practical workflow example: Assessing prompt injection risk

  • TopAIThreats: Adversarial Evasion pattern → causal factor Prompt Injection Vulnerability → framework mapping → mitigation guidance
  • AIID: Search for prompt injection incidents → find additional documented cases with media coverage
  • MITRE ATLAS: AML.T0051 (LLM Prompt Injection) → technique-level procedures → specific detection methods
  • MIT AI Risk Repo: Literature on prompt injection → academic papers → theoretical risk analysis
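The four-database workflow above amounts to maintaining a cross-reference per threat. A minimal sketch for the prompt injection example, where every key below except the ATLAS technique ID (AML.T0051, named above) is an illustrative label rather than an official record key in any of these databases:

```python
# Hypothetical cross-reference for one threat across the four databases.
PROMPT_INJECTION_XREF = {
    "topaithreats": {"pattern": "Adversarial Evasion",
                     "causal_factor": "Prompt Injection Vulnerability"},
    "aiid": {"search": "prompt injection"},          # free-text incident search
    "mitre_atlas": {"technique": "AML.T0051"},       # LLM Prompt Injection
    "mit_risk_repo": {"query": "prompt injection"},  # literature lookup
}

def sources_for(threat_xref: dict) -> list[str]:
    """List which databases have an entry point for this threat."""
    return sorted(threat_xref)
```

Keeping such cross-references in version control alongside a risk register makes it cheap to re-check each database when a threat is reassessed.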

Methodology Note

This comparison was compiled by reviewing each database’s public documentation and content as of March 2026. Coverage assessments are based on systematic review of each database’s content categories, not automated analysis. Incident and entry counts are rounded estimates based on publicly available figures at time of writing and may lag live databases — next scheduled verification: June 2026. This is an independent comparison maintained by TopAIThreats — other databases may characterize their coverage differently. If you believe this comparison contains inaccuracies, contact us for correction.