An incident-backed, control-mapped reference to the contributing factors behind documented AI-enabled threat incidents. Each factor is grounded in real-world evidence, mapped to organizational control domains, and linked to the incidents that demonstrate it.
The 15 factors are organized across four categories that map to organizational lifecycle phases: Malicious Misuse (operations and incident response), Design & Development (design and pre-deployment), Deployment & Integration (deployment and operations), and Systemic & Organizational (ongoing governance). Each factor profile includes a definition, incident signatures, control domains with likely ownership, documented incidents, and an assessment checklist.
Most incidents involve multiple factors across categories. These factors are not mutually exclusive — treat them as a matrix, not a flat list.
15 factors · 4 categories · 231 references across 97 incidents · Last updated: 2026-03-03
Causal Factors in Documented Incidents
Ranked by incident count.
If Intentional Fraud risks are high, also assess Social Engineering and Weaponization — incidents show these factors frequently co-occur in coordinated deception campaigns.
Assessment checklist
Are synthetic media detection tools deployed at identity verification checkpoints?
Do high-value transactions require multi-factor identity verification?
Is provenance tracking in place for AI-generated content?
Are staff trained on deepfake and AI-generated fraud indicators?
If Social Engineering risks are high, also assess Intentional Fraud and Weaponization — AI-enhanced social engineering often serves as the delivery mechanism for broader fraud schemes.
Assessment checklist
Do we have detection systems for synthetic voice and video in real-time communications?
Are out-of-band verification procedures established for sensitive requests?
Is social engineering awareness training conducted with AI-specific scenarios?
Do our communication filters detect AI-generated content patterns?
Deliberate adaptation or creation of AI tools specifically designed to cause harm, including cyberweapons, autonomous targeting systems, and purpose-built attack platforms.
How this factor appears in incidents
Safety-bypassed models fine-tuned to circumvent safety controls
AI-generated malware and exploit code for cyber operations
Offensive autonomous systems deployed for surveillance or attacks
Purpose-built attack tools marketed on underground forums (e.g., WormGPT)
AI-enhanced targeting for surveillance, reconnaissance, or kinetic operations
Associated assets & technologies
Large Language Models (7) Autonomous Agents (2) Generative Image Models (1) Social Media Platforms (1) Code Generation Tools (1) Agentic AI Systems (1)
Control domains: Threat intelligence, Red teaming, Secure model interfaces
If Weaponization risks are high, also assess Adversarial Attack and Intentional Fraud — weaponized AI tools frequently exploit technical vulnerabilities to enable fraud at scale.
Assessment checklist
Do we monitor for fine-tuned models distributed through underground channels?
Are capability restrictions implemented on publicly available AI systems?
Do we coordinate with threat intelligence communities on emerging AI weapons?
Are legal frameworks established for prosecuting AI weaponization?
Technical exploitation of AI model vulnerabilities through crafted inputs designed to manipulate model behavior, extract training data, or cause misclassification.
How this factor appears in incidents
Crafted adversarial inputs causing systematic model misclassification
Training pipeline poisoning introducing backdoors into model weights
Model extraction through systematic querying of public endpoints
Security control evasion via adversarial perturbations of inputs
Confidence calibration exploits in high-stakes decision systems
Associated assets & technologies
Large Language Models (2) Autonomous Agents (1) Content Platforms (1)
Control domains: Application security, Robustness testing, Input validation
If Adversarial Attack risks are high, also assess Prompt Injection Vulnerability and Weaponization — technical model exploits are often the entry point for broader attack chains.
Assessment checklist
Is adversarial robustness testing part of our model evaluation pipeline?
Are input validation and anomaly detection deployed on model interfaces?
Do we use ensemble approaches to reduce single-model vulnerability?
Do we monitor for systematic probing patterns indicating extraction attempts?
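The last checklist item, monitoring for systematic probing, can be sketched as a sliding-window heuristic. This is an illustrative example only: the `ProbeMonitor` class, its thresholds, and its token-overlap similarity measure are assumptions, not a documented detection standard; production systems would calibrate thresholds against real traffic.

```python
from collections import defaultdict, deque
import time

class ProbeMonitor:
    """Flag clients whose query patterns suggest systematic model probing.

    Heuristic sketch: a client issuing many near-identical queries (small
    input perturbations) in a short window is flagged for review.
    Thresholds are illustrative, not calibrated.
    """

    def __init__(self, window_seconds=60, max_queries=100, max_similar=20):
        self.window_seconds = window_seconds
        self.max_queries = max_queries
        self.max_similar = max_similar
        self.history = defaultdict(deque)  # client_id -> deque of (timestamp, query)

    def _similar(self, a, b):
        # Crude similarity: shared-token overlap (Jaccard) ratio.
        ta, tb = set(a.split()), set(b.split())
        if not ta or not tb:
            return False
        return len(ta & tb) / len(ta | tb) > 0.8

    def record(self, client_id, query, now=None):
        now = time.time() if now is None else now
        q = self.history[client_id]
        q.append((now, query))
        # Drop entries that have aged out of the sliding window.
        while q and now - q[0][0] > self.window_seconds:
            q.popleft()
        if len(q) > self.max_queries:
            return "rate_alert"
        similar = sum(1 for _, prev in q if self._similar(prev, query))
        if similar > self.max_similar:
            return "probe_alert"
        return "ok"
```

A burst of near-duplicate queries from one client would trip the `probe_alert` path, prompting review of the access logs for that client.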
Deployment of AI systems without adequate testing for failure modes, edge cases, bias, or harmful outputs across the range of real-world conditions they will encounter.
How this factor appears in incidents
Predictable edge-case failures that pre-deployment testing should have caught
Post-deployment harm discovery from untested real-world scenarios
Missing high-risk evaluations for foreseeable harmful use cases
Narrow benchmark reliance instead of real-world condition testing
Known failure deprioritization before product launch under time pressure
Associated assets & technologies
Large Language Models (19) Autonomous Agents (9) Decision Automation (8) Foundation Models (4) Content Platforms (4) Industrial Control Systems (3)
Control domains: Model evaluation, Red teaming, QA / risk acceptance
If Insufficient Safety Testing risks are high, also assess Model Opacity and Training Data Bias — untested systems with opaque reasoning and biased data produce the most harmful outcomes.
Assessment checklist
Has pre-deployment red-team testing been conducted across documented risk categories?
Are minimum safety evaluation standards established proportional to deployment risk?
Is staged rollout with monitoring gates implemented before broad availability?
Have third-party safety audits been completed for high-risk applications?
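The staged-rollout item above can be expressed as a simple gate loop. This is a sketch under stated assumptions: `staged_rollout`, `harm_rate_fn`, and the threshold value are hypothetical names and parameters, not part of any referenced methodology.

```python
def staged_rollout(stages, harm_rate_fn, max_harm_rate=0.01):
    """Advance exposure through stages (e.g. 1% -> 10% -> 100% of traffic)
    only while the observed harm rate stays at or under a threshold.

    harm_rate_fn(stage) is assumed to return the measured rate of flagged
    harmful outputs at that exposure level. Returns the stages that passed.
    """
    passed = []
    for stage in stages:
        rate = harm_rate_fn(stage)
        if rate > max_harm_rate:
            break  # halt rollout; hold at the last safe stage for investigation
        passed.append(stage)
    return passed
```

The point of the gate is that expansion is conditional: a harm-rate regression at any stage halts the rollout rather than letting commercial timelines carry it forward.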
Systematic errors in AI outputs caused by biased, unrepresentative, or historically discriminatory training data that encodes and amplifies societal inequities.
How this factor appears in incidents
Demographic underperformance for specific population groups or protected classes
Historical discrimination encoding reproduced and amplified in model outputs
Bias amplification loops reinforcing existing inequities through feedback cycles
Disproportionate error rates across protected characteristics in decision systems
Unrepresentative training data missing or undersampling affected populations
Associated assets & technologies
Decision Automation (8) Content Platforms (3) Recommender Systems (2) Large Language Models (2) Financial Systems (1) Foundation Models (1)
Control domains: Data governance, Fairness & ethics, Data quality
If Training Data Bias risks are high, also assess Model Opacity and Insufficient Safety Testing — biased models that cannot be audited and were inadequately tested produce systematic discrimination.
Assessment checklist
Has demographic parity analysis been conducted across model outputs before deployment?
Is ongoing bias monitoring implemented with disaggregated performance metrics?
Is training data curated with explicit attention to representativeness and historical bias?
Are feedback mechanisms established for affected communities to report bias?
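The disaggregated-metrics item can be sketched as a per-group rate comparison. The function name, field names, and the gap statistic below are illustrative assumptions; real fairness audits use context-appropriate metrics and tolerances (for example, the four-fifths ratio used in some employment-discrimination reviews).

```python
def disaggregated_rates(records, group_key, outcome_key):
    """Compute per-group positive-outcome rates and the largest gap between groups.

    records is a list of dicts; a gap above a chosen tolerance signals the
    need for a deeper bias audit. Field names are illustrative.
    """
    counts, positives = {}, {}
    for r in records:
        g = r[group_key]
        counts[g] = counts.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[outcome_key] else 0)
    rates = {g: positives[g] / counts[g] for g in counts}
    gap = max(rates.values()) - min(rates.values()) if rates else 0.0
    return rates, gap
```

Running this periodically on production decisions, not just at deployment, addresses the ongoing-monitoring item in the same checklist.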
Inability to understand, audit, or explain how an AI system reaches its decisions, creating accountability gaps and preventing meaningful oversight or contestation.
How this factor appears in incidents
Unexplainable decisions affecting individuals' rights or life outcomes
Unauditable reasoning when errors or downstream harms are discovered
High-stakes deployment without interpretability or explainability requirements
Blocked appeals processes unable to address the basis of automated decisions
Regulatory examination failure when decision logic cannot be inspected
Associated assets & technologies
Decision Automation (8) Financial Systems (1) Large Language Models (1) Foundation Models (1) Autonomous Agents (1) Recommender Systems (1)
Control domains: Explainability, Model documentation, Audit trails
If Model Opacity risks are high, also assess Accountability Vacuum and Insufficient Safety Testing — opaque systems without clear accountability create the conditions for undetected and uncontested harm.
Assessment checklist
Are explainability standards defined proportional to decision impact for this system?
Are model documentation practices (model cards, system cards) in place at deployment?
Are affected individuals provided meaningful explanations of automated decisions?
Are audit trails maintained that enable post-hoc review of model decision factors?
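The audit-trail item can be sketched as an append-only log whose entries chain a hash of the previous entry, making after-the-fact tampering detectable during post-hoc review. The schema and function name are assumptions for illustration, not a standard.

```python
import datetime
import hashlib
import json

def log_decision(audit_log, model_id, inputs, decision, top_factors):
    """Append a tamper-evident audit record for one automated decision.

    Each entry embeds a hash of the previous entry, so any modification of
    an earlier record breaks the chain. Schema is a sketch, not a standard.
    """
    prev_hash = audit_log[-1]["entry_hash"] if audit_log else ""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,
        "decision": decision,
        "top_factors": top_factors,  # e.g. feature attributions shown to reviewers
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True, default=str).encode()
    ).hexdigest()
    audit_log.append(entry)
    return entry
```

Recording the decision factors alongside the decision is what makes later appeals and regulatory examination possible, addressing the opacity failure modes listed above.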
Inherent tendency of generative AI models to produce confident but factually incorrect, fabricated, or misleading outputs that users may trust as authoritative.
How this factor appears in incidents
Fabricated information presented confidently as factual content
Non-existent citations in legal filings, academic papers, or official reports
Confident out-of-scope outputs beyond the model's reliable knowledge boundary
Consequential decisions based on hallucinated outputs that users or organizations trusted as accurate
Fabrication propagation through downstream systems without verification
Associated assets & technologies
Large Language Models (9) Content Platforms (4) Decision Automation (1)
Control domains: Output verification, RAG architecture, Human review processes
If Hallucination Tendency risks are high, also assess Over-Automation and Insufficient Safety Testing — hallucinated outputs cause the most harm when automated systems act on them without human review.
Assessment checklist
Is retrieval-augmented generation implemented to ground outputs in verified sources?
Are output verification layers deployed for high-stakes applications?
Are model limitations and confidence levels communicated to end users?
Are human review requirements established for AI-generated content in critical contexts?
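One concrete form of the output-verification item is a citation check in a RAG pipeline: any source the model cites that was not actually retrieved is treated as a possible fabrication and routed to human review. The interface below is an illustrative sketch, not a specific framework's API.

```python
def verify_citations(answer_citations, retrieved_ids):
    """Check that every source cited in a model answer was actually retrieved.

    A minimal output-verification layer for a RAG pipeline: citations with
    no matching retrieved-document ID are flagged as possible fabrications
    for human review. Interfaces are illustrative.
    """
    retrieved = set(retrieved_ids)
    unsupported = [c for c in answer_citations if c not in retrieved]
    return {"verified": not unsupported, "unsupported": unsupported}
```

This catches the non-existent-citation failure mode listed above (legal filings, academic papers, official reports) before the output leaves the system.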
Exploitation of language model architectures where untrusted input can override system instructions, extract confidential prompts, or hijack model behavior.
How this factor appears in incidents
System prompt extraction through conversational manipulation techniques
Instruction override via embedded commands in user-supplied input
Data exfiltration through crafted conversational flows and tool calls
Safety guardrail bypass through jailbreak and context-switching techniques
Cross-tool injection in agentic systems with plugin or MCP access
Associated assets & technologies
Large Language Models (10) Autonomous Agents (7) Training Datasets (1) Content Platforms (1)
If Prompt Injection Vulnerability risks are high, also assess Adversarial Attack and Inadequate Access Controls — prompt injection exploits are most damaging when access controls fail to limit what the model can reach.
Assessment checklist
Have we implemented strict separation of system prompts, user content, and tool I/O?
Are prompt injection detection layers deployed before model processing?
Are LLM tools and connectors restricted to minimal required data and capabilities?
Has the application been red-teamed for indirect and cross-tool prompt injection?
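The first two checklist items can be sketched as two small pieces: keeping system, user, and tool content in separately labeled channels, and screening user input for common injection phrasings. Both are illustrative assumptions; pattern matching alone is easily bypassed, and real deployments layer classifiers, canary tokens, and tool-call policy checks on top.

```python
import re

# Illustrative patterns only; not a complete or robust injection filter.
INJECTION_PATTERNS = [
    r"ignore (all |the )?(previous|prior|above) instructions",
    r"reveal (your )?(system )?prompt",
    r"you are now",
]

def screen_user_input(text):
    """Flag user-supplied text matching common injection phrasings."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_messages(system_prompt, user_text, tool_output=None):
    """Keep system, user, and tool content in separate, labeled channels
    rather than concatenating them into one string."""
    messages = [{"role": "system", "content": system_prompt},
                {"role": "user", "content": user_text}]
    if tool_output is not None:
        # Tool output is untrusted data and must never be
        # re-interpreted as instructions by the application.
        messages.append({"role": "tool", "content": tool_output})
    return messages
```

The structural separation matters more than the pattern list: cross-tool injection in agentic systems (the last incident signature above) typically succeeds because tool output was folded back into the instruction channel.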
If Inadequate Access Controls risks are high, also assess Misconfigured Deployment and Prompt Injection Vulnerability — access failures are most exploitable when deployment configurations expose sensitive capabilities.
Assessment checklist
Is access tiered by user authorization level and use case?
Are data loss prevention measures applied to AI system outputs?
Are age and identity verification checks in place for AI capabilities with harm potential?
Are API rate limiting and access logs reviewed for anomalous use patterns?
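The rate-limiting item can be sketched with a standard per-client token bucket. Parameters are illustrative; in practice this sits alongside authorization tiers and periodic anomaly review of the access logs, per the other items in the checklist.

```python
import time

class TokenBucket:
    """Simple per-client token-bucket rate limiter for an AI API.

    Each request consumes one token; tokens refill at a fixed rate up to
    a capacity. Parameters are illustrative, not recommended values.
    """

    def __init__(self, capacity=10, refill_per_second=1.0):
        self.capacity = capacity
        self.refill = refill_per_second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Denied requests are as useful as allowed ones here: spikes of `False` results per client are exactly the anomalous use patterns the checklist asks reviewers to look for.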
AI systems deployed with incorrect settings, inappropriate scope, or mismatched configurations that create unintended exposure, capability gaps, or operational failures.
How this factor appears in incidents
Out-of-scope operation beyond the system's intended domain or use case
Default configuration exposure of sensitive capabilities left open
Disabled safety guardrails that were available but not enabled for this deployment
Unintended data flows from integration errors between connected systems
Missing configuration review for the specific deployment environment
Associated assets & technologies
Large Language Models (7) Training Datasets (3) Autonomous Agents (2) Decision Automation (2) Industrial Control Systems (1) Content Platforms (1)
Control domains: Secure configuration, Change management, Release governance
If Misconfigured Deployment risks are high, also assess Inadequate Access Controls and Insufficient Safety Testing — configuration errors are most damaging when access controls and pre-deployment testing also fail.
Assessment checklist
Is a deployment checklist covering security, privacy, and safety configurations in use?
Is environment-specific configuration review required before production deployment?
Are default-deny configurations implemented requiring explicit capability enabling?
Is post-deployment verification conducted to confirm settings match intended design?
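The default-deny and post-deployment verification items can be combined into one pre-release gate: every required safety setting must be present and enabled, and every capability must be explicitly approved. The key names and function below are hypothetical, chosen for illustration.

```python
# Illustrative required settings; a real gate would derive these from the
# organization's deployment checklist for the specific environment.
REQUIRED_SAFETY_KEYS = {"content_filter", "logging", "rate_limit"}

def verify_deployment_config(config, approved_capabilities):
    """Return a list of violations; an empty list means the config passes.

    Default-deny posture: capabilities absent from the approved set are
    violations even if the deployment 'works' with them enabled.
    """
    violations = []
    for key in sorted(REQUIRED_SAFETY_KEYS):
        if not config.get(key):
            violations.append(f"safety setting missing or disabled: {key}")
    for cap in config.get("capabilities", []):
        if cap not in approved_capabilities:
            violations.append(f"capability not explicitly approved: {cap}")
    return violations
```

Run once before production cutover and again after deployment, the same check covers both the environment-specific review and the settings-match-design verification items above.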
If Over-Automation risks are high, also assess Accountability Vacuum and Hallucination Tendency — fully automated systems without clear accountability and with hallucination-prone models are where the worst harms accumulate.
Assessment checklist
Is a documented and meaningful human override preserved for each high-stakes automated decision?
Are clear boundaries established for AI decision authority versus human authority?
Are circuit breakers implemented that escalate to human review on uncertainty or anomaly?
Do affected individuals have an accessible appeals process for automated decisions?
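The circuit-breaker item above can be sketched as a routing function: low model confidence or an anomaly flag always escalates to a human instead of executing autonomously. The function name and threshold are illustrative assumptions, not a prescribed design.

```python
def route_decision(prediction, confidence, confidence_floor=0.9, anomaly=False):
    """Decide whether an automated decision executes or escalates to a human.

    Sketch of the circuit-breaker pattern: uncertainty (confidence below
    the floor) or an anomaly flag always yields human review rather than
    autonomous action. The threshold is illustrative.
    """
    if anomaly or confidence < confidence_floor:
        return {"action": "escalate_to_human", "prediction": prediction}
    return {"action": "execute", "prediction": prediction}
```

For the escalation to count as a meaningful human override, the reviewer needs authority to reverse the prediction and access to the decision factors, which ties this control back to the audit-trail requirements under Model Opacity.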
Absence or inadequacy of legal frameworks, enforcement mechanisms, or regulatory standards governing the development, deployment, or use of AI systems in specific contexts.
How this factor appears in incidents
Unregulated sector deployment with no applicable AI-specific regulation
Cross-jurisdictional exploitation to avoid regulatory requirements
Regulatory lag behind rapid AI capability development
Missing enforcement mechanisms for existing AI guidelines and standards
Novel harm blindness in existing regulatory frameworks
Associated assets & technologies
Large Language Models (6) Content Platforms (6) Training Datasets (6) Decision Automation (6) Foundation Models (4) Autonomous Agents (3)
Control domains: Legal compliance, Regulatory monitoring, Policy
If Regulatory Gap risks are high, also assess Accountability Vacuum and Competitive Pressure — absent regulation combined with competitive pressure and unclear accountability creates conditions for systematic harm.
Assessment checklist
Has a regulatory gap analysis been performed for each AI product deployment?
Are voluntary governance standards implemented ahead of regulatory requirements?
Do we participate in relevant industry standard-setting bodies?
Is regulatory monitoring in place for AI governance developments in all jurisdictions?
If Competitive Pressure risks are high, also assess Insufficient Safety Testing and Regulatory Gap — market pressure most directly erodes safety testing, especially in unregulated domains.
Assessment checklist
Are pre-deployment safety gates established that cannot be overridden by commercial timelines?
Is safety evaluation built into development timelines as a required phase?
Do we support industry-wide safety standards that level the competitive playing field?
Do organizational incentives reward responsible development alongside capability?
Diffusion or absence of clear responsibility for AI system outcomes across the development, deployment, and use chain, leaving harmed parties without recourse.
How this factor appears in incidents
No identifiable responsible party when AI systems cause harm
Responsibility shifting between developers, deployers, and users
Liability disclaimers in terms of service for AI-mediated decisions
Obscured deployment authorization hiding who approved harmful AI use
No recourse pathway for individuals harmed by AI system outputs
Associated assets & technologies
Large Language Models (5) Decision Automation (5) Autonomous Agents (4) Industrial Control Systems (3) Training Datasets (3) Biometric Data (2)
Control domains: RACI, Incident response, Liability and escalation
If Accountability Vacuum risks are high, also assess Regulatory Gap and Model Opacity — diffuse responsibility is most harmful when regulation is absent and model decisions cannot be explained.
Assessment checklist
Are clear chains of responsibility defined across the AI lifecycle for this system?
Are AI impact assessments conducted that assign accountability for foreseeable harms?
Is a responsible officer designated for each AI deployment?
Are accessible recourse mechanisms established for individuals harmed by AI systems?
Causal factors are assigned during incident classification based on evidence analysis. A single incident typically involves multiple contributing factors that interact to enable or amplify harm. Factor definitions and assignments follow the TopAIThreats methodology. Stable identifiers (CAUSE-001 through CAUSE-015) are permanent and never reused.