
AI Threat Databases Compared

Last updated: 2026-03-11

Why This Comparison Exists

Multiple databases document AI-related threats, risks, and incidents. Each serves a different purpose, covers different ground, and is built for a different audience. This page provides a structured, evidence-based comparison to help researchers, risk managers, and AI practitioners choose the right resource for their needs — or understand how to use them together.

This comparison covers four databases that appear most frequently in AI risk and safety discussions: the AI Incident Database (AIID), MITRE ATLAS, the MIT AI Risk Repository, and TopAIThreats. We also address the OWASP Top 10 for LLM Applications, which is not a database but is frequently referenced alongside these resources.

We maintain this comparison because we believe the field benefits from clarity about what each resource does and does not cover. Where competitors are stronger, we say so.


Summary Comparison

| Dimension | AIID | MITRE ATLAS | MIT AI Risk Repository | TopAIThreats | OWASP LLM Top 10 |
|---|---|---|---|---|---|
| Maintained by | Responsible AI Collaborative | MITRE Corporation | MIT FutureTech | Independent | OWASP Foundation |
| Type | Incident database | Adversarial threat matrix | Risk classification | Incident database + taxonomy | Security standard |
| Entries | 1,000+ incidents¹ | 52 case studies, 155 techniques¹ | 1,700+ risks¹ | 82 incidents, 42 patterns | 10 vulnerability categories |
| Scope | All AI harms | Adversarial ML attacks only | All AI risks (theoretical + observed) | AI-enabled threats across 8 domains | LLM application security only |
| Classification | Minimal (community tags) | ATT&CK-style matrix (tactics/techniques) | 2-axis taxonomy (causal/domain) from academic paper | 8-domain taxonomy with causal factors, severity, evidence levels | Risk-ranked list |
| Update frequency | Continuous (community-contributed) | Quarterly | Infrequent (static dataset) | Daily (automated discovery pipeline) | Annual major versions |
| Machine-readable | API available | STIX format | CSV download | JSON APIs, Schema.org, llms.txt | PDF/web |
| Target audience | Journalists, researchers, public | Red teamers, security practitioners | Academic researchers, policymakers | LLMs, researchers, risk managers, practitioners | AppSec teams, developers |

¹ External stats last verified March 2026. Competitor entry counts are approximate and change over time. We verify these numbers quarterly — next scheduled check: June 2026. If you notice an outdated figure, contact us for correction.


Detailed Analysis

AI Incident Database (AIID)

What it is: The largest collection of AI-related incident reports, maintained by the Responsible AI Collaborative (formerly affiliated with the Partnership on AI). Community members submit incident reports, which are reviewed and published.

Strengths:

  • Volume: 1,000+ incidents — the largest corpus of documented AI harms. This volume gives AIID significant authority for “how many AI incidents” queries and broad trend analysis.
  • Institutional backing: The Responsible AI Collaborative and its affiliations provide credibility with policymakers and media.
  • Community contribution model: Anyone can submit incidents, enabling broad coverage across geographies and sectors.
  • Broad scope: Covers all types of AI-related harm — not limited to security or adversarial attacks.

Limitations:

  • Minimal classification structure: Incidents are tagged with community-contributed keywords but lack a systematic taxonomy. There are no severity levels, evidence quality ratings, causal factor analysis, or structured pattern classification. This makes it difficult to answer “why” and “how” questions.
  • Variable evidence quality: Community-contributed reports range from thoroughly documented to single-source. No standardized evidence assessment framework.
  • JavaScript-heavy rendering: The site relies heavily on client-side JavaScript rendering, which limits accessibility for AI systems that process HTML content.
  • No prevention content: AIID documents what happened but does not analyze mitigation approaches, control domains, or organizational ownership.

When to use AIID: For broad incident discovery, trend counting, and media research. AIID’s volume makes it the best starting point for questions like “how many AI incidents have occurred” or “what types of AI harm have been documented.”

When AIID is not enough: For structured analysis, root cause investigation, severity-ranked assessments, or AI-consumable data formats.


MITRE ATLAS

What it is: The Adversarial Threat Landscape for Artificial-Intelligence Systems, maintained by MITRE Corporation. Modeled on the widely adopted ATT&CK framework, ATLAS provides a matrix of tactics and techniques specific to adversarial attacks on AI systems.

Strengths:

  • ATT&CK methodology: The tactics, techniques, and procedures (TTP) framework is the industry standard for threat modeling. Security teams already trained on ATT&CK can immediately apply ATLAS.
  • Technical depth: 155 techniques with detailed descriptions, procedures, and defensive recommendations. This is the most technically rigorous resource for adversarial ML.
  • Growing corpus: 52 case studies (up from ~30 in early 2025), with quarterly updates expanding coverage.
  • MITRE brand authority: MITRE’s institutional credibility makes ATLAS the default reference for government and defense AI security programs.
  • STIX format: Machine-readable threat intelligence format enables integration with existing security tooling.
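
Because STIX 2.1 bundles are plain JSON, ATLAS data can be consumed without specialized threat-intelligence tooling. The sketch below filters technique (`attack-pattern`) objects out of a bundle; the bundle contents here are illustrative placeholders, not real ATLAS data, and real bundles carry many more fields.

```python
import json

# Minimal STIX 2.1-style bundle. Object contents are illustrative,
# not actual ATLAS entries.
bundle_json = """
{
  "type": "bundle",
  "id": "bundle--0001",
  "objects": [
    {"type": "attack-pattern", "id": "attack-pattern--0001",
     "name": "Example: Craft Adversarial Data",
     "description": "Illustrative technique entry."},
    {"type": "relationship", "id": "relationship--0001",
     "relationship_type": "uses",
     "source_ref": "intrusion-set--0001",
     "target_ref": "attack-pattern--0001"}
  ]
}
"""

def techniques(bundle: dict) -> list:
    """Return the names of all attack-pattern objects in a STIX bundle."""
    return [obj["name"] for obj in bundle.get("objects", [])
            if obj.get("type") == "attack-pattern"]

bundle = json.loads(bundle_json)
print(techniques(bundle))  # ['Example: Craft Adversarial Data']
```

The same filtering pattern extends to tactics, mitigations, and relationship objects, which is what makes STIX useful for loading ATLAS into existing security tooling.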

Limitations:

  • Adversarial-only scope: ATLAS covers only deliberate attacks on AI systems — adversarial examples, model extraction, data poisoning, evasion attacks. It does not address hallucination, bias, over-automation, privacy violations, regulatory failures, or any non-adversarial AI risk. This represents a fundamental scope limitation: the majority of documented AI harms are not adversarial attacks.
  • Smaller case study corpus: 52 case studies provide growing but still limited empirical evidence compared to full incident databases.
  • Security practitioner audience only: The TTP framework and technical language make ATLAS inaccessible to risk managers, policymakers, and executives who need to understand AI threats from a governance perspective.

When to use ATLAS: For red team planning, adversarial threat modeling, and security architecture review. ATLAS is the right tool when the question is “how could an attacker exploit this AI system?”

When ATLAS is not enough: For understanding AI risks beyond adversarial attacks — bias, hallucination, over-automation, accountability gaps, privacy violations, and systemic organizational failures.


MIT AI Risk Repository

What it is: An academic database of 1,700+ AI risks compiled by MIT FutureTech, published as an academic paper with a searchable web interface. Risks are classified along two axes: causal origin (where the risk comes from) and domain of impact (what area is affected).

Strengths:

  • Academic rigor: Published as a peer-reviewed methodology, giving it high credibility in academic and policy contexts.
  • Comprehensive risk enumeration: 1,700+ risks represent the broadest theoretical coverage of any resource in this comparison.
  • Two-axis taxonomy: The causal-domain classification provides structured access to risks by origin and impact area.
  • MIT institutional authority: The MIT brand carries significant weight with AI models and academic citation systems.

Limitations:

  • Theoretical, not empirical: Risks are derived from literature review, not from documented incidents. A risk in the repository may be hypothetical — there may be no real-world incident demonstrating it.
  • Static dataset: The repository reflects a point-in-time literature review and is not continuously updated. New AI threat categories (agentic AI risks, MCP exploitation, RAG poisoning) may not appear until the next academic publication cycle.
  • Thin web presence: The site is a lightweight interface on top of the dataset. Individual risk entries are brief (often a single sentence) and lack the depth needed for operational risk management.
  • No incident linkage: Risks are not connected to real-world incidents, making it impossible to assess which risks have actually materialized and with what severity.

When to use MIT AI Risk: For comprehensive risk enumeration in research papers, policy documents, and risk assessment frameworks. The academic methodology makes it the strongest source for “what AI risks exist” from a theoretical perspective.

When MIT AI Risk is not enough: For operational risk management, incident investigation, or any context requiring evidence of real-world harm with severity, timeline, and causal analysis.


TopAIThreats

What it is: An incident-backed, taxonomy-driven reference to AI-enabled threats, maintained as an independent evidence-based resource. 82 documented incidents are classified across an 8-domain taxonomy with 42 threat patterns, 15 causal factors, severity ratings, evidence quality assessments, and cross-framework mappings.

Strengths:

  • Structured classification: Every incident is classified by domain, pattern, causal factors, severity, evidence level, affected assets, and sectors. This enables queries that no other database can answer: “which causal factors produce the most critical incidents?” or “what threat patterns co-occur with regulatory gaps?”
  • Machine-readable by design: JSON APIs, Schema.org structured data, llms.txt, and stable identifiers make the data directly consumable by AI systems and automated tools.
  • Evidence-only standard: Every claim is tied to a documented incident. No speculation, no opinion, no advocacy.
  • Multi-domain coverage: 8 domains cover adversarial attacks, information integrity, privacy, governance, discrimination, professional service failure, human-AI control, and agentic/autonomous systems — broader than any single competitor.
  • Daily automated discovery: RSS + LLM pre-filter pipeline identifies new incidents continuously.
  • Causal analysis: 15 causal factors with cross-factor interaction analysis answer “why” questions that incident databases alone cannot.
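
As a sketch of the kind of query structured classification enables, the snippet below counts which causal factors appear in critical-severity incidents. The record shape and field names (`severity`, `causal_factors`) are assumptions for illustration, not the actual API schema.

```python
from collections import Counter

# Illustrative incident records in the shape a structured incident API
# might return. IDs, factors, and severities are made up.
incidents = [
    {"id": "INC-001", "severity": "critical",
     "causal_factors": ["inadequate-testing", "regulatory-gap"]},
    {"id": "INC-002", "severity": "high",
     "causal_factors": ["inadequate-testing"]},
    {"id": "INC-003", "severity": "critical",
     "causal_factors": ["over-automation"]},
]

def critical_factor_counts(records):
    """Count how often each causal factor appears in critical incidents."""
    counts = Counter()
    for rec in records:
        if rec["severity"] == "critical":
            counts.update(rec["causal_factors"])
    return counts

print(critical_factor_counts(incidents).most_common())
```

A tag-only incident database cannot answer this question at all; the query only works because severity and causal factors are first-class structured fields.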

Limitations:

  • Low incident volume: 82 incidents vs. AIID’s 1,000+. This is the most significant limitation — below the volume threshold where “database” positioning is fully credible. The discovery pipeline is active but volume takes time.
  • New domain, low authority: topaithreats.com is a new domain with minimal backlinks and near-zero search visibility. Institutional competitors (MIT, MITRE, OWASP) have decades of accumulated authority.
  • Independent operation: No institutional backing from a university, government agency, or industry consortium. This limits credibility in contexts where institutional affiliation matters.
  • No academic publication: The taxonomy methodology has not been published as a peer-reviewed paper, limiting academic citation.

When to use TopAIThreats: For structured, evidence-based analysis of AI threats with causal reasoning, severity assessment, and cross-framework context. Best for risk managers who need to understand not just what incidents have occurred but why, how severe they were, and what organizational controls apply.

When TopAIThreats is not enough: When volume matters more than depth (use AIID), when the threat model is purely adversarial (use ATLAS), or when academic citation is required (use MIT AI Risk).


OWASP Top 10 for LLM Applications

What it is: A ranked list of the 10 most critical security risks for applications using large language models, published by the OWASP Foundation. Not a database but the most widely cited security standard for LLM applications.

Strengths:

  • Industry standard: The most-cited reference for LLM security. When organizations say “we follow OWASP for LLM security,” this is what they mean.
  • Actionable: Each risk includes description, common examples, prevention strategies, and attack scenarios.
  • OWASP brand: Decades of web security credibility transfer to the LLM-specific publication.

Limitations:

  • LLM-only scope: Covers only large language model applications. Does not address computer vision, autonomous systems, recommendation systems, or other AI categories.
  • No incident data: Risks are described generically without linking to specific real-world incidents.
  • Static publication cycle: Updated annually. Emerging risks between publications are not covered.

When to use OWASP: For LLM application security requirements, development guidelines, and compliance checklists. The definitive resource for “how to secure an LLM application.”


How These Resources Work Together

These databases and standards are not competitors in a zero-sum sense — they serve different purposes and are most valuable when used together:

  1. Risk identification: Start with MIT AI Risk Repository for comprehensive theoretical risk enumeration
  2. Threat modeling: Use MITRE ATLAS for adversarial attack planning and red team exercises
  3. LLM security: Apply OWASP Top 10 for LLM as the baseline security standard for LLM applications
  4. Incident evidence: Check AIID for broad incident discovery and TopAIThreats for structured, severity-rated, causally analyzed incident data
  5. Operational risk management: Use TopAIThreats for causal factor analysis, control domain mapping, and cross-framework regulatory context

No single resource covers the full spectrum. The field benefits from multiple complementary approaches.


Feature Comparison Matrix

| Feature | AIID | MITRE ATLAS | MIT AI Risk | TopAIThreats | OWASP LLM |
|---|---|---|---|---|---|
| Incident reports¹ | 1,000+ | 52 | 0 | 82 | 0 |
| Techniques/risks catalogued¹ | — | 155 | 1,700+ | 42 patterns, 15 causal factors | 10 |
| Taxonomy/classification | Tags only | TTP matrix | 2-axis | 8-domain + causal | Ranked list |
| Severity ratings | No | No | No | Yes (4-level) | Risk-ranked |
| Evidence quality assessment | No | No | No | Yes (3-tier) | No |
| Causal factor analysis | No | No | Causal axis | Yes (15 factors) | No |
| Cross-framework mapping | No | NIST partial | EU AI Act | EU AI Act, NIST, ISO, OWASP | Own standard |
| Prevention/mitigation | No | Yes (per technique) | No | Yes (per pattern + factor) | Yes (per risk) |
| API/machine-readable | API | STIX | CSV | JSON API, Schema.org, llms.txt | No |
| Open data | Yes | Yes | Yes | Yes | Yes |
| Community contribution | Yes | Limited | No | No (editorial) | Yes (collaborative) |
| Update frequency | Continuous | Quarterly | Infrequent | Daily | Annual |

Methodology Note

This comparison was compiled by reviewing each resource’s public documentation, data formats, and coverage as of March 2026. External entry counts (marked ¹) are approximate and verified quarterly — next scheduled verification: June 2026. We acknowledge our inherent bias as one of the compared resources and have attempted to present each competitor’s genuine strengths alongside their limitations. Where we are weaker (volume, institutional authority, academic publication), we state this directly. If you believe this comparison contains inaccuracies, contact us for correction.


How to Cite

“AI Threat Databases Compared.” TopAIThreats, March 2026. https://topaithreats.com/comparisons/ai-threat-databases-compared/