TopAIThreats

Causal Factors

An incident-backed, control-mapped reference to the contributing factors behind documented AI-enabled threat incidents. Each factor is grounded in real-world evidence, mapped to organizational control domains, and linked to the incidents that demonstrate it.

The 15 factors are organized across four categories that map to organizational lifecycle phases: Malicious Misuse (operations and incident response), Design & Development (design and pre-deployment), Deployment & Integration (deployment and operations), and Systemic & Organizational (ongoing governance). Each factor profile includes a definition, incident signatures, control domains with likely ownership, documented incidents, and an assessment checklist.

Most incidents involve multiple factors across categories. These factors are not mutually exclusive — treat them as a matrix, not a flat list.

15 factors · 4 categories · 231 references across 97 incidents · Last updated: 2026-03-03

Causal Factors in Documented Incidents

Ranked by incident count. Click any factor to jump to its full profile.

Code Factor Incidents
CAUSE-006 Insufficient Safety Testing 34
CAUSE-009 Inadequate Access Controls 27
CAUSE-013 Regulatory Gap 21
CAUSE-010 Over-Automation 18
CAUSE-014 Accountability Vacuum 16
CAUSE-001 Intentional Fraud 15
CAUSE-005 Training Data Bias 13
CAUSE-015 Competitive Pressure 12
CAUSE-012 Misconfigured Deployment 11
CAUSE-004 Social Engineering 10
CAUSE-008 Model Opacity 10
CAUSE-011 Prompt Injection Vulnerability 10
CAUSE-003 Weaponization 9
CAUSE-007 Hallucination Tendency 9
CAUSE-002 Adversarial Attack 2

Malicious Misuse

36 incident references across 4 factors

CAUSE-001

Intentional Fraud (15 incidents)

Deliberate use of AI capabilities to deceive, impersonate, or defraud individuals and organizations for financial, political, or personal gain.

How this factor appears in incidents

  • Synthetic media impersonation in video or audio calls targeting executives
  • Fraudulent AI-generated content passed off as authentic documents or communications
  • Automated phishing at scale using language models for personalization
  • Financial fraud schemes exploiting AI-generated credibility signals
  • Identity manipulation to bypass verification and authentication systems

Associated assets & technologies

Voice Synthesis (9) Identity Credentials (6) Content Platforms (5) Large Language Models (3) Biometric Data (2) Foundation Models (2)

Control domains: Fraud controls, KYC / identity verification, Customer verification

Likely owner: Fraud / Security / Ops

Documented incidents (15) — showing top 10

View all 15 incidents involving Intentional Fraud →

Related causal factors

If Intentional Fraud risks are high, also assess Social Engineering and Weaponization — incidents show these factors frequently co-occur in coordinated deception campaigns.

Assessment checklist

  • Are synthetic media detection tools deployed at identity verification checkpoints?
  • Do high-value transactions require multi-factor identity verification?
  • Is provenance tracking in place for AI-generated content?
  • Are staff trained on deepfake and AI-generated fraud indicators?
CAUSE-004

Social Engineering (10 incidents)

Use of AI to craft, personalize, or scale social engineering attacks that exploit human trust, authority, or emotional responses.

How this factor appears in incidents

  • Voice or video impersonation of authority figures using generative AI
  • Personalized phishing at scale with language model-generated messages
  • Trust exploitation in AI-mediated communications and chatbots
  • Emotional manipulation through synthetic content targeting vulnerable individuals
  • Automated victim profiling for precision-targeted social attacks

Associated assets & technologies

Voice Synthesis (8) Identity Credentials (6) Content Platforms (3) Biometric Data (2) Large Language Models (2)

Control domains: Employee training, Communication security, Verification procedures

Likely owner: Security / Ops

Documented incidents (10)

Related causal factors

If Social Engineering risks are high, also assess Intentional Fraud and Weaponization — AI-enhanced social engineering often serves as the delivery mechanism for broader fraud schemes.

Assessment checklist

  • Do we have detection systems for synthetic voice and video in real-time communications?
  • Are out-of-band verification procedures established for sensitive requests?
  • Is social engineering awareness training conducted with AI-specific scenarios?
  • Do our communication filters detect AI-generated content patterns?
CAUSE-003

Weaponization (9 incidents)

Deliberate adaptation or creation of AI tools specifically designed to cause harm, including cyberweapons, autonomous targeting systems, and purpose-built attack platforms.

How this factor appears in incidents

  • Models fine-tuned to strip or circumvent built-in safety controls
  • AI-generated malware and exploit code for cyber operations
  • Offensive autonomous systems deployed for surveillance or attacks
  • Purpose-built attack tools marketed on underground forums (e.g., WormGPT)
  • AI-enhanced targeting for surveillance, reconnaissance, or kinetic operations

Associated assets & technologies

Large Language Models (7) Autonomous Agents (2) Generative Image Models (1) Social Media Platforms (1) Code Generation Tools (1) Agentic AI Systems (1)

Control domains: Threat intelligence, Red teaming, Secure model interfaces

Likely owner: Security / Threat Intel

Documented incidents (9)

Related causal factors

If Weaponization risks are high, also assess Adversarial Attack and Intentional Fraud — weaponized AI tools frequently exploit technical vulnerabilities to enable fraud at scale.

Assessment checklist

  • Do we monitor for fine-tuned models distributed through underground channels?
  • Are capability restrictions implemented on publicly available AI systems?
  • Do we coordinate with threat intelligence communities on emerging AI weapons?
  • Are legal frameworks established for prosecuting AI weaponization?
CAUSE-002

Adversarial Attack (2 incidents)

Technical exploitation of AI model vulnerabilities through crafted inputs designed to manipulate model behavior, extract training data, or cause misclassification.

How this factor appears in incidents

  • Crafted adversarial inputs causing systematic model misclassification
  • Training pipeline poisoning introducing backdoors into model weights
  • Model extraction through systematic querying of public endpoints
  • Security control evasion via adversarial perturbations of inputs
  • Confidence calibration exploits in high-stakes decision systems

Associated assets & technologies

Large Language Models (2) Autonomous Agents (1) Content Platforms (1)

Control domains: Application security, Robustness testing, Input validation

Likely owner: AppSec / AI Platform

Documented incidents (2)
ID Title Severity
INC-25-0001 AI-Orchestrated Cyber Espionage Campaign Against Critical Infrastructure critical
INC-16-0002 Microsoft Tay Twitter Chatbot Adversarial Manipulation high

Related causal factors

If Adversarial Attack risks are high, also assess Prompt Injection Vulnerability and Weaponization — technical model exploits are often the entry point for broader attack chains.

Assessment checklist

  • Is adversarial robustness testing part of our model evaluation pipeline?
  • Are input validation and anomaly detection deployed on model interfaces?
  • Do we use ensemble approaches to reduce single-model vulnerability?
  • Do we monitor for systematic probing patterns indicating extraction attempts?
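The last checklist item — watching for systematic probing that precedes model extraction — can be sketched as a per-client sliding-window query counter. This is a minimal illustration, not a production detector; the class name, window, and threshold are assumptions:

```python
from collections import defaultdict, deque
import time

class ProbeMonitor:
    """Flag clients whose query volume against a model endpoint looks like
    systematic probing (a common precursor to model extraction).
    Thresholds here are illustrative, not calibrated."""

    def __init__(self, window_s=60.0, max_queries=100):
        self.window_s = window_s
        self.max_queries = max_queries
        self._events = defaultdict(deque)  # client_id -> query timestamps

    def record(self, client_id, now=None):
        """Record one query; return True if the client should be flagged."""
        now = time.monotonic() if now is None else now
        q = self._events[client_id]
        q.append(now)
        # Drop events that have fallen out of the sliding window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_queries
```

A real deployment would also compare query *similarity* (extraction attempts tend to sweep the input space methodically), not just volume.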

Design & Development

66 incident references across 4 factors

CAUSE-006

Insufficient Safety Testing (34 incidents)

Deployment of AI systems without adequate testing for failure modes, edge cases, bias, or harmful outputs across the range of real-world conditions they will encounter.

How this factor appears in incidents

  • Predictable edge-case failures that pre-deployment testing should have caught
  • Post-deployment harm discovery from untested real-world scenarios
  • Missing high-risk evaluations for foreseeable harmful use cases
  • Narrow benchmark reliance instead of real-world condition testing
  • Known failure deprioritization before product launch under time pressure

Associated assets & technologies

Large Language Models (19) Autonomous Agents (9) Decision Automation (8) Foundation Models (4) Content Platforms (4) Industrial Control Systems (3)

Control domains: Model evaluation, Red teaming, QA / risk acceptance

Likely owner: AI Safety / Product / Security

Documented incidents (34) — showing top 10

View all 34 incidents involving Insufficient Safety Testing →

Related causal factors

If Insufficient Safety Testing risks are high, also assess Model Opacity and Training Data Bias — untested systems with opaque reasoning and biased data produce the most harmful outcomes.

Assessment checklist

  • Has pre-deployment red-team testing been conducted across documented risk categories?
  • Are minimum safety evaluation standards established proportional to deployment risk?
  • Is staged rollout with monitoring gates implemented before broad availability?
  • Have third-party safety audits been completed for high-risk applications?
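The first two checklist items amount to a release gate: no launch unless every required risk category has been evaluated and meets a minimum pass rate. A minimal sketch, with hypothetical category names and thresholds:

```python
def release_gate(eval_results, thresholds):
    """Pre-deployment safety gate: block release if any required risk
    category is missing from the eval results or scores below its
    minimum pass rate. Returns (ok, failures)."""
    failures = []
    for category, minimum in thresholds.items():
        score = eval_results.get(category)
        if score is None:
            failures.append((category, "not evaluated"))
        elif score < minimum:
            failures.append((category, f"score {score:.2f} below {minimum:.2f}"))
    return (not failures), failures
```

The key property is that an *unevaluated* category fails the gate, so forgetting a test under time pressure cannot silently pass.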
CAUSE-005

Training Data Bias (13 incidents)

Systematic errors in AI outputs caused by biased, unrepresentative, or historically discriminatory training data that encodes and amplifies societal inequities.

How this factor appears in incidents

  • Demographic underperformance for specific population groups or protected classes
  • Historical discrimination encoding reproduced and amplified in model outputs
  • Bias amplification loops reinforcing existing inequities through feedback cycles
  • Disproportionate error rates across protected characteristics in decision systems
  • Unrepresentative training data missing or undersampling affected populations

Associated assets & technologies

Decision Automation (8) Content Platforms (3) Recommender Systems (2) Large Language Models (2) Financial Systems (1) Foundation Models (1)

Control domains: Data governance, Fairness & ethics, Data quality

Likely owner: Data / Responsible AI

Documented incidents (13) — showing top 10

View all 13 incidents involving Training Data Bias →

Related causal factors

If Training Data Bias risks are high, also assess Model Opacity and Insufficient Safety Testing — biased models that cannot be audited and were inadequately tested produce systematic discrimination.

Assessment checklist

  • Has demographic parity analysis been conducted across model outputs before deployment?
  • Is ongoing bias monitoring implemented with disaggregated performance metrics?
  • Is training data curated with explicit attention to representativeness and historical bias?
  • Are feedback mechanisms established so affected communities can report suspected bias?
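The "disaggregated performance metrics" item from the checklist above can be sketched as a per-group error-rate computation plus a simple disparity screen. The record schema and the max/min ratio heuristic are illustrative assumptions:

```python
def disaggregated_error_rates(records, group_key="group"):
    """records: dicts with 'group', 'label', 'pred' keys (assumed schema).
    Returns the error rate per demographic group."""
    totals, errors = {}, {}
    for r in records:
        g = r[group_key]
        totals[g] = totals.get(g, 0) + 1
        if r["pred"] != r["label"]:
            errors[g] = errors.get(g, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

def disparity_ratio(rates):
    """Max/min error-rate ratio across groups; a coarse screening
    heuristic, not a fairness certification."""
    lo, hi = min(rates.values()), max(rates.values())
    if lo == 0:
        return float("inf") if hi > 0 else 1.0
    return hi / lo
```

A ratio well above 1.0 is a signal to investigate, ideally broken down further by error type (false positives vs false negatives often diverge).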
CAUSE-008

Model Opacity (10 incidents)

Inability to understand, audit, or explain how an AI system reaches its decisions, creating accountability gaps and preventing meaningful oversight or contestation.

How this factor appears in incidents

  • Unexplainable decisions affecting individuals' rights or life outcomes
  • Unauditable reasoning when errors or downstream harms are discovered
  • High-stakes deployment without interpretability or explainability requirements
  • Blocked appeals processes unable to address the basis of automated decisions
  • Regulatory examination failure when decision logic cannot be inspected

Associated assets & technologies

Decision Automation (8) Financial Systems (1) Large Language Models (1) Foundation Models (1) Autonomous Agents (1) Recommender Systems (1)

Control domains: Explainability, Model documentation, Audit trails

Likely owner: AI Safety / Product

Documented incidents (10)

Related causal factors

If Model Opacity risks are high, also assess Accountability Vacuum and Insufficient Safety Testing — opaque systems without clear accountability create the conditions for undetected and uncontested harm.

Assessment checklist

  • Are explainability standards defined proportional to decision impact for this system?
  • Are model documentation practices (model cards, system cards) in place at deployment?
  • Are affected individuals provided meaningful explanations of automated decisions?
  • Are audit trails maintained that enable post-hoc review of model decision factors?
CAUSE-007

Hallucination Tendency (9 incidents)

Inherent tendency of generative AI models to produce confident but factually incorrect, fabricated, or misleading outputs that users may trust as authoritative.

How this factor appears in incidents

  • Fabricated information presented confidently as factual content
  • Non-existent citations in legal filings, academic papers, or official reports
  • Confident out-of-scope outputs beyond the model's reliable knowledge boundary
  • Decisions based on hallucinated content by users or organizations trusting model output
  • Fabrication propagation through downstream systems without verification

Associated assets & technologies

Large Language Models (9) Content Platforms (4) Decision Automation (1)

Control domains: Output verification, RAG architecture, Human review processes

Likely owner: AI Safety / Product

Documented incidents (9)

Related causal factors

If Hallucination Tendency risks are high, also assess Over-Automation and Insufficient Safety Testing — hallucinated outputs cause the most harm when automated systems act on them without human review.

Assessment checklist

  • Is retrieval-augmented generation implemented to ground outputs in verified sources?
  • Are output verification layers deployed for high-stakes applications?
  • Are model limitations and confidence levels communicated to end users?
  • Are human review requirements established for AI-generated content in critical contexts?
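One concrete form of the "output verification layer" from the checklist is a citation check: reject any draft whose cited sources do not resolve to documents that were actually retrieved. A minimal sketch; the bracketed `[docN]` citation format is an assumption:

```python
import re

def verify_citations(draft, retrieved_ids):
    """Reject a model draft whose bracketed citations (e.g. '[doc3]')
    don't all resolve to retrieved documents. Returns (ok, unknown)."""
    cited = set(re.findall(r"\[(\w+)\]", draft))
    unknown = cited - set(retrieved_ids)
    return (not unknown), sorted(unknown)
```

This catches fabricated references (the legal-filing failure mode above) but not fabricated *claims* attributed to real sources, which still require entailment checks or human review.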

Deployment & Integration

66 incident references across 4 factors

CAUSE-011

Prompt Injection Vulnerability (10 incidents)

Exploitation of language model architectures where untrusted input can override system instructions, extract confidential prompts, or hijack model behavior.

How this factor appears in incidents

  • System prompt extraction through conversational manipulation techniques
  • Instruction override via embedded commands in user-supplied input
  • Data exfiltration through crafted conversational flows and tool calls
  • Safety guardrail bypass through jailbreak and context-switching techniques
  • Cross-tool injection in agentic systems with plugin or MCP access

Associated assets & technologies

Large Language Models (10) Autonomous Agents (7) Training Datasets (1) Content Platforms (1)

Control domains: Application security, LLM-specific security testing, Agent / tool sandboxing

Likely owner: AppSec / AI Platform

Documented incidents (10)

Related causal factors

If Prompt Injection risks are high, also assess Adversarial Attack and Inadequate Access Controls — prompt injection exploits are most damaging when access controls fail to limit what the model can reach.

Assessment checklist

  • Have we implemented strict separation of system prompts, user content, and tool I/O?
  • Are prompt injection detection layers deployed before model processing?
  • Are LLM tools and connectors restricted to minimal required data and capabilities?
  • Has the application been red-teamed for indirect and cross-tool prompt injection?
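The first checklist item — strict separation of system prompts, user content, and tool I/O — can be sketched as a message-construction helper that never concatenates untrusted text into the instruction channel. The role names and wrapper tag are illustrative, not a specific vendor API:

```python
def build_messages(system_prompt, user_text, tool_output=None):
    """Keep system instructions, user content, and tool I/O in separate,
    explicitly labeled message roles, so untrusted text is never parsed
    as instructions. Role names are assumptions for illustration."""
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]
    if tool_output is not None:
        # Wrap tool output as data with an explicit non-instruction marker;
        # the system prompt should tell the model to treat it as data only.
        messages.append({
            "role": "tool",
            "content": f"<untrusted_data>{tool_output}</untrusted_data>",
        })
    return messages
```

Separation alone does not stop injection — models can still follow instructions embedded in the tool channel — which is why the checklist pairs it with tool restriction and red-teaming.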
CAUSE-009

Inadequate Access Controls (27 incidents)

Insufficient restrictions on who can access, use, or extract data from AI systems, enabling unauthorized use, data exposure, or capability misuse.

How this factor appears in incidents

  • Sensitive data exposure through AI interfaces without proper authorization
  • Unintended capability access by unauthorized user populations
  • Missing age verification on AI capabilities with potential for harm
  • Unauthenticated API endpoints exposing model interfaces to the public
  • Training data leakage extractable through normal conversational interaction

Associated assets & technologies

Large Language Models (22) Autonomous Agents (6) Content Platforms (6) Training Datasets (5) Foundation Models (3) Code Generation Tools (1)

Control domains: Identity & access management, Data loss prevention, API security

Likely owner: Security / Platform

Documented incidents (27) — showing top 10

View all 27 incidents involving Inadequate Access Controls →

Related causal factors

If Inadequate Access Controls risks are high, also assess Misconfigured Deployment and Prompt Injection Vulnerability — access failures are most exploitable when deployment configurations expose sensitive capabilities.

Assessment checklist

  • Is access tiered by user authorization level and use case?
  • Are data loss prevention measures applied to AI system outputs?
  • Is age and identity verification in place for AI capabilities with harm potential?
  • Are API rate limiting and access logs reviewed for anomalous use patterns?
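Tiered access by authorization level (the first checklist item) can be sketched as a default-deny capability lookup. The tier and capability names are hypothetical:

```python
# Hypothetical tiers: higher values include lower-tier capabilities.
TIERS = {"public": 0, "verified": 1, "internal": 2}

# Hypothetical capability -> minimum required tier.
CAPABILITY_TIER = {
    "chat": "public",
    "file_search": "verified",
    "bulk_export": "internal",
}

def authorize(user_tier, capability):
    """Default-deny: unknown capabilities and unknown or insufficient
    tiers are all refused rather than allowed through."""
    required = CAPABILITY_TIER.get(capability)
    if required is None:
        return False
    return TIERS.get(user_tier, -1) >= TIERS[required]
```

The design choice worth noting is that every lookup failure resolves to "deny", which is the same posture the Misconfigured Deployment checklist below asks for in configuration.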
CAUSE-012

Misconfigured Deployment (11 incidents)

AI systems deployed with incorrect settings, inappropriate scope, or mismatched configurations that create unintended exposure, capability gaps, or operational failures.

How this factor appears in incidents

  • Out-of-scope operation beyond the system's intended domain or use case
  • Default configuration exposure of sensitive capabilities left open
  • Disabled safety guardrails that were available but not enabled for this deployment
  • Unintended data flows from integration errors between connected systems
  • Missing configuration review for the specific deployment environment

Associated assets & technologies

Large Language Models (7) Training Datasets (3) Autonomous Agents (2) Decision Automation (2) Industrial Control Systems (1) Content Platforms (1)

Control domains: Secure configuration, Change management, Release governance

Likely owner: SRE / Infra / DevOps

Documented incidents (11) — showing top 10

View all 11 incidents involving Misconfigured Deployment →

Related causal factors

If Misconfigured Deployment risks are high, also assess Inadequate Access Controls and Insufficient Safety Testing — configuration errors are most damaging when access controls and pre-deployment testing also fail.

Assessment checklist

  • Is a deployment checklist covering security, privacy, and safety configurations in use?
  • Is environment-specific configuration review required before production deployment?
  • Are default-deny configurations implemented requiring explicit capability enabling?
  • Is post-deployment verification conducted to confirm settings match intended design?
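The last checklist item — verifying that live settings match the intended design — can be sketched as a configuration-drift diff. A minimal illustration with assumed flat key/value configs:

```python
def config_drift(intended, live):
    """Compare intended vs live deployment settings; report keys that
    are missing, mismatched, or unexpected. Default-deny posture:
    anything present in live but absent from the intended design is
    flagged rather than ignored."""
    issues = []
    for key, value in intended.items():
        if key not in live:
            issues.append(f"missing: {key}")
        elif live[key] != value:
            issues.append(f"mismatch: {key} = {live[key]!r}, expected {value!r}")
    for key in live:
        if key not in intended:
            issues.append(f"unexpected: {key}")
    return sorted(issues)
```

Run as a post-deployment check, an empty result confirms the environment matches design; any output blocks sign-off.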
CAUSE-010

Over-Automation (18 incidents)

Excessive delegation of decision-making authority to AI systems in contexts that require human judgment, oversight, or intervention capability.

How this factor appears in incidents

  • Unsupervised critical decisions made entirely by AI without human review
  • Removed human override in automated workflows affecting individuals
  • Contextual judgment gaps in fully automated high-stakes domains
  • Blocked user appeals for fully automated decisions with no recourse
  • Cascading automated failures without human checkpoints to intervene

Associated assets & technologies

Large Language Models (8) Decision Automation (6) Autonomous Agents (4) Industrial Control Systems (4) Content Platforms (1) Chatbots (1)

Control domains: Product governance, Human-in-the-loop design, Operational risk

Likely owner: Product / Risk

Documented incidents (18) — showing top 10

View all 18 incidents involving Over-Automation →

Related causal factors

If Over-Automation risks are high, also assess Accountability Vacuum and Hallucination Tendency — fully automated systems without clear accountability and with hallucination-prone models are where the worst harms accumulate.

Assessment checklist

  • Is a documented and meaningful human override preserved for each high-stakes automated decision?
  • Are clear boundaries established for AI decision authority versus human authority?
  • Are circuit breakers implemented that escalate to human review on uncertainty or anomaly?
  • Do affected individuals have an accessible appeals process for automated decisions?
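The "circuit breaker" item from the checklist above can be sketched as a routing function that only decides automatically when the model is confident, and escalates everything else to a human. Thresholds and labels are illustrative assumptions:

```python
def route_decision(score, confidence, approve_at=0.8, min_confidence=0.9):
    """Circuit-breaker routing for an automated decision: act
    automatically only above a confidence floor; otherwise escalate to
    human review. Thresholds here are illustrative, not calibrated."""
    if confidence < min_confidence:
        return "human_review"
    return "auto_approve" if score >= approve_at else "auto_deny"
```

In a real workflow the same escalation path should also fire on anomaly signals (input drift, unusual volume), not just per-decision confidence, so cascading failures hit a human checkpoint.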

Systemic & Organizational

49 incident references across 3 factors

CAUSE-013

Regulatory Gap (21 incidents)

Absence or inadequacy of legal frameworks, enforcement mechanisms, or regulatory standards governing the development, deployment, or use of AI systems in specific contexts.

How this factor appears in incidents

  • Unregulated sector deployment with no applicable AI-specific regulation
  • Cross-jurisdictional exploitation to avoid regulatory requirements
  • Regulatory lag behind rapid AI capability development
  • Missing enforcement mechanisms for existing AI guidelines and standards
  • Novel harm blindness in existing regulatory frameworks

Associated assets & technologies

Large Language Models (6) Content Platforms (6) Training Datasets (6) Decision Automation (6) Foundation Models (4) Autonomous Agents (3)

Control domains: Legal compliance, Regulatory monitoring, Policy

Likely owner: Legal / Policy

Documented incidents (21) — showing top 10

View all 21 incidents involving Regulatory Gap →

Related causal factors

If Regulatory Gap risks are high, also assess Accountability Vacuum and Competitive Pressure — absent regulation combined with competitive pressure and unclear accountability creates conditions for systematic harm.

Assessment checklist

  • Has a regulatory gap analysis been performed for each AI product deployment?
  • Are voluntary governance standards implemented ahead of regulatory requirements?
  • Do we participate in relevant industry standard-setting bodies?
  • Is regulatory monitoring in place for AI governance developments in all jurisdictions?
CAUSE-015

Competitive Pressure (12 incidents)

Market dynamics and organizational incentives that prioritize speed of AI deployment over safety, testing, or responsible development practices.

How this factor appears in incidents

  • Shortened safety testing to meet competitive deadlines
  • Premature capability deployment before adequate safety evaluation
  • Immature system launch driven by market pressure rather than readiness
  • Safety deprioritization in favor of public launch timelines
  • Under-resourced safety research relative to capability investment

Associated assets & technologies

Large Language Models (6) Decision Automation (3) Autonomous Agents (2) Recommender Systems (2) API Interfaces (1) Voice Synthesis (1)

Control domains: Incentive structures, Product portfolio decisions, Risk appetite

Likely owner: Exec / Product

Documented incidents (12) — showing top 10

View all 12 incidents involving Competitive Pressure →

Related causal factors

If Competitive Pressure risks are high, also assess Insufficient Safety Testing and Regulatory Gap — market pressure most directly erodes safety testing, especially in unregulated domains.

Assessment checklist

  • Are pre-deployment safety gates established that cannot be overridden by commercial timelines?
  • Is safety evaluation built into development timelines as a required phase?
  • Do we support industry-wide safety standards that level the competitive playing field?
  • Do organizational incentives reward responsible development alongside capability?
CAUSE-014

Accountability Vacuum (16 incidents)

Diffusion or absence of clear responsibility for AI system outcomes across the development, deployment, and use chain, leaving harmed parties without recourse.

How this factor appears in incidents

  • No identifiable responsible party when AI systems cause harm
  • Responsibility shifting between developers, deployers, and users
  • Liability disclaimers in terms of service for AI-mediated decisions
  • Obscured deployment authorization hiding who approved harmful AI use
  • No recourse pathway for individuals harmed by AI system outputs

Associated assets & technologies

Large Language Models (5) Decision Automation (5) Autonomous Agents (4) Industrial Control Systems (3) Training Datasets (3) Biometric Data (2)

Control domains: RACI, Incident response, Liability and escalation

Likely owner: Exec / Governance

Documented incidents (16) — showing top 10

View all 16 incidents involving Accountability Vacuum →

Related causal factors

If Accountability Vacuum risks are high, also assess Regulatory Gap and Model Opacity — diffuse responsibility is most harmful when regulation is absent and model decisions cannot be explained.

Assessment checklist

  • Are clear chains of responsibility defined across the AI lifecycle for this system?
  • Are AI impact assessments conducted that assign accountability for foreseeable harms?
  • Is a responsible officer designated for each AI deployment?
  • Are accessible recourse mechanisms established for individuals harmed by AI systems?

Causal factors are assigned during incident classification based on evidence analysis. A single incident typically involves multiple contributing factors that interact to enable or amplify harm. Factor definitions and assignments follow the TopAIThreats methodology. Stable identifiers (CAUSE-001 through CAUSE-015) are permanent and never reused.