Skip to main content
TopAIThreats home TOP AI THREATS

OWASP Top 10 for LLM Applications: Mapping to Real-World AI Threats

Last updated: 2026-03-12

Why This Mapping Exists

The OWASP Top 10 for LLM Applications is the most widely cited security standard for LLM-based systems. It identifies ten critical vulnerability categories that developers and security teams must address. However, OWASP describes vulnerabilities in isolation — it does not connect them to documented real-world incidents, broader threat domains, or root causes.

This mapping bridges that gap. For each OWASP LLM vulnerability, we identify the corresponding TopAIThreats threat patterns, link to documented incidents, and trace the underlying causal factors that enable exploitation. The result is a practitioner reference that connects security standards to empirical threat intelligence.

This page references the OWASP Top 10 for LLM Applications v2025 (published August 2025).


Complete Mapping

Each OWASP vulnerability is mapped to a primary TopAIThreats domain, but many have real-world impacts that cross into other domains. The primary domain reflects where the threat originates; secondary domains in the analysis below show where the harm materialises. A single OWASP vulnerability can trigger risks across multiple domains.

LLM01: Prompt Injection

OWASP definition: Manipulation of LLM behavior through crafted inputs that override or redirect model instructions, including direct injection (user input) and indirect injection (via external data sources).

TopAIThreats mapping:

Why it matters beyond OWASP: OWASP treats prompt injection as a single vulnerability. The TopAIThreats taxonomy reveals it operates across multiple threat domains — a prompt injection attack on an agentic AI system (DOM-AGT) creates different risks than one targeting a content generation system (DOM-INF). The causal factor analysis shows that prompt injection is enabled by architectural decisions (mixing instructions and data in the same channel) that cannot be fully mitigated by input filtering alone.

Documented incidents: See prompt injection incidents for severity-rated examples with causal analysis.


LLM02: Sensitive Information Disclosure

OWASP definition: Unintended exposure of sensitive data through LLM outputs, including training data extraction, PII leakage, and proprietary information disclosure.

TopAIThreats mapping:

Why it matters beyond OWASP: Information disclosure in LLMs is not just a confidentiality issue — it intersects with privacy regulation (GDPR Article 17 right to erasure creates tension with model memorization, an area of ongoing regulatory and technical debate), discrimination law (if disclosed attributes reveal protected characteristics), and competitive intelligence (extraction of fine-tuning data representing trade secrets).


LLM03: Supply Chain Vulnerabilities

OWASP definition: Risks from third-party components including pre-trained models, training data, and deployment platforms that introduce vulnerabilities, backdoors, or poisoned data.

TopAIThreats mapping:

Why it matters beyond OWASP: Supply chain risks in AI extend beyond traditional software vulnerabilities. A poisoned pre-trained model can introduce biases that persist through fine-tuning, affecting fairness outcomes (DOM-SOC) in ways that standard security scanning cannot detect. The causal factor analysis identifies that insufficient safety testing at the integration boundary — where pre-trained models meet production data — is the root enabler.


LLM04: Data and Model Poisoning

OWASP definition: Manipulation of training data or fine-tuning processes to introduce vulnerabilities, backdoors, or biased behaviors into the model.

TopAIThreats mapping:

Why it matters beyond OWASP: OWASP groups data poisoning and model poisoning together. The TopAIThreats taxonomy distinguishes between adversarial data poisoning (intentional, DOM-SEC) and training data bias (systemic, enabling DOM-SOC discrimination harms). This distinction matters because the mitigations differ: adversarial poisoning requires integrity verification and anomaly detection, while bias requires representative data curation and fairness testing.


LLM05: Improper Output Handling

OWASP definition: Insufficient validation, sanitization, or handling of LLM outputs before passing them to downstream components, enabling injection attacks, code execution, or unintended actions.

TopAIThreats mapping:

Why it matters beyond OWASP: Improper output handling becomes critical in agentic AI systems where LLM outputs directly execute actions — API calls, database queries, file operations. The TopAIThreats agentic domain (DOM-AGT) documents how output handling failures in multi-agent systems can cascade, with one agent’s hallucinated output becoming another agent’s trusted input.


LLM06: Excessive Agency

OWASP definition: LLM-based systems granted excessive functionality, permissions, or autonomy, enabling unintended actions with real-world impact.

TopAIThreats mapping:

Why it matters beyond OWASP: OWASP frames excessive agency as a permissions problem. The TopAIThreats taxonomy reveals it as a human-AI control problem (DOM-CTL) intersecting with agentic autonomy risks (DOM-AGT). The causal factor “over-automation” identifies the organizational pattern: teams grant AI systems broad permissions for convenience, creating a systemic vulnerability that cannot be addressed by technical controls alone.


LLM07: System Prompt Leakage

OWASP definition: Risk that system prompts containing sensitive business logic, safety instructions, or access credentials are exposed to users through prompt extraction techniques.

TopAIThreats mapping:

Why it matters beyond OWASP: System prompt leakage is a prerequisite for many other attacks. Once an attacker knows the system prompt, adversarial evasion becomes significantly easier because safety instructions can be specifically targeted. The TopAIThreats incident database documents cases where prompt leakage preceded more severe exploitation.


LLM08: Vector and Embedding Weaknesses

OWASP definition: Vulnerabilities in RAG (Retrieval-Augmented Generation) systems and vector databases, including poisoned embeddings, unauthorized access to vector stores, and embedding inversion attacks.

TopAIThreats mapping:

Why it matters beyond OWASP: RAG poisoning represents a new attack surface that traditional security frameworks do not address. An attacker who can inject content into a vector database can influence LLM outputs without touching the model itself. The TopAIThreats taxonomy classifies this under both Security & Cyber (the attack mechanism) and Information Integrity (the resulting misinformation risk), reflecting its cross-domain impact.


LLM09: Misinformation

OWASP definition: LLM generation of false, misleading, or fabricated content (hallucinations) that users may trust and act upon.

TopAIThreats mapping:

Why it matters beyond OWASP: OWASP treats misinformation as a single output quality issue. The TopAIThreats taxonomy reveals a spectrum from individual hallucinations (single factual errors) to consensus reality erosion (systemic degradation of reliable information). The causal factor analysis identifies hallucination tendency as an inherent architectural limitation of current LLMs — not a bug to be fixed but a characteristic to be managed through verification pipelines and human oversight.


LLM10: Unbounded Consumption

OWASP definition: LLM applications consuming excessive computational resources through denial-of-service attacks, resource-intensive queries, or uncontrolled recursive operations.

TopAIThreats mapping:

Why it matters beyond OWASP: Resource exhaustion attacks on AI systems can have cascading effects beyond the targeted application. When AI inference infrastructure is shared (as in cloud deployments), unbounded consumption by one application can degrade service for others — a systemic risk (DOM-SYS) that OWASP’s application-level framing does not capture.


Cross-Reference Summary

OWASP LLMPrimary TopAIThreats PatternDomainKey Causal Factor
LLM01: Prompt InjectionAdversarial EvasionSecurity & CyberPrompt Injection Vulnerability
LLM02: Sensitive Info DisclosureModel Inversion & Data ExtractionSecurity & CyberInadequate Access Controls
LLM03: Supply ChainData PoisoningSecurity & CyberInsufficient Safety Testing
LLM04: Data & Model PoisoningData PoisoningSecurity & CyberTraining Data Bias
LLM05: Improper Output HandlingTool Misuse & Privilege EscalationAgentic & AutonomousMisconfigured Deployment
LLM06: Excessive AgencyTool Misuse & Privilege EscalationAgentic & AutonomousOver-Automation
LLM07: System Prompt LeakageModel Inversion & Data ExtractionSecurity & CyberPrompt Injection Vulnerability
LLM08: Vector & EmbeddingData PoisoningSecurity & CyberInadequate Access Controls
LLM09: MisinformationMisinformation & Hallucinated ContentInformation IntegrityHallucination Tendency
LLM10: Unbounded ConsumptionAutomated Vulnerability DiscoverySecurity & CyberMisconfigured Deployment

Key insight: 7 of 10 OWASP LLM vulnerabilities map to the Security & Cyber domain (DOM-SEC), reflecting OWASP’s security-focused perspective. However, the TopAIThreats taxonomy shows that the real-world impacts of these vulnerabilities frequently cross into other domains — Privacy (LLM02), Discrimination (LLM03/04), Human-AI Control (LLM06), Information Integrity (LLM09), and Systemic Risk (LLM10). A security-only response to these vulnerabilities misses the broader risk picture.


What OWASP Does Not Cover

The OWASP Top 10 for LLM Applications is an excellent security standard, but it addresses only one dimension of AI risk. Threat domains not covered by OWASP include:

  • Discrimination & Social Harm (DOM-SOC) — algorithmic bias, discriminatory outcomes, representational harm
  • Economic & Labor (DOM-ECO) — workforce displacement, market manipulation, power concentration
  • Privacy & Surveillance (DOM-PRI) — mass surveillance, behavioral profiling, biometric exploitation
  • Human–AI Control (DOM-CTL) — automation bias, loss of human agency, deceptive interfaces
  • Systemic & Catastrophic (DOM-SYS) — infrastructure collapse, existential risk, weapons applications

Organizations that rely solely on OWASP for AI risk management have significant blind spots in non-security threat domains. The TopAIThreats taxonomy provides the broader threat landscape context that security standards alone cannot offer.


How to Use This Mapping

For AppSec teams: Start with OWASP as your LLM security checklist, then use this mapping to understand the broader risk implications of each vulnerability — which threat domains are affected, what incidents have occurred, and what causal factors enable exploitation.

For risk managers: Use this mapping to translate between security (OWASP) and risk management (TopAIThreats) vocabularies. When security teams report “LLM01 vulnerability,” this page tells you it maps to adversarial evasion risks with documented incidents and organizational causal factors.

For compliance teams: Cross-reference this mapping with the AI Governance Frameworks Compared page to understand how OWASP vulnerabilities relate to regulatory requirements under the EU AI Act, NIST AI RMF, and ISO 42001.


Methodology Note

This mapping references the OWASP Top 10 for LLM Applications v2025 (published August 2025) and the TopAIThreats taxonomy v3.0 (March 2026). Pattern and causal factor mappings are maintained by the TopAIThreats editorial team and updated when either OWASP or the taxonomy is revised. For example, the v2025 OWASP release renamed and reordered several entries from the 2023 version — mappings were updated at that time to reflect the new vulnerability names and scope changes. This is not an official OWASP publication — it is an independent cross-reference maintained by TopAIThreats. If you believe this mapping contains inaccuracies, contact us for correction.