
Assets & Technologies

An incident-backed reference to the AI technologies and data types involved in documented threat incidents. Each asset profile is grounded in real-world evidence, mapped to organizational control domains, and linked to the incidents in which the asset appears.

The 12 assets are organized across 4 types that map to the AI technology stack: Data (training inputs and identity artifacts), Model (AI models and generation systems), System (platforms and decision engines), and Infrastructure (physical and financial systems with AI integration). Each asset profile includes a definition, incident signatures, co-occurring causal factors, control domains with likely ownership, and documented incidents.

Most incidents involve multiple assets across types. These assets are not mutually exclusive — treat them as an interconnected technology surface, not isolated components.

12 assets · 4 types · 149 references across 97 incidents · Last updated: 2026-03-03

Assets in Documented Incidents

Ranked by incident count. Click any asset to jump to its full profile.

Code · Asset · Incidents
ASST-009 Large Language Models 46
ASST-004 Decision Automation 19
ASST-003 Content Platforms 17
ASST-001 Autonomous Agents 16
ASST-012 Voice Synthesis 11
ASST-011 Training Datasets 10
ASST-006 Foundation Models 7
ASST-007 Identity Credentials 7
ASST-002 Biometric Data 6
ASST-008 Industrial Control Systems 5
ASST-010 Recommender Systems 3
ASST-005 Financial Systems 2

Data

23 incident references across 3 assets

ASST-011

Training Datasets (10 incidents)

Collections of structured or unstructured data used to train, fine-tune, or evaluate AI models. Training data quality, representativeness, and provenance directly determine model behavior and failure modes.

How this asset appears in incidents

  • Biased or unrepresentative samples leading to discriminatory model outputs across demographic groups
  • Poisoned training data introducing backdoors or adversarial behavior into production models
  • Unauthorized data inclusion incorporating copyrighted, personal, or sensitive content without consent
  • Mislabeled or noisy data causing systematic misclassification in deployed systems

Control domains: Data governance, Data quality, Privacy compliance

Likely owner: Data Engineering / Responsible AI

Documented incidents (10)

Risk considerations

  • Data provenance gaps make it difficult to audit what the model learned and from whom
  • Retroactive removal of problematic data from trained weights remains largely infeasible in practice
  • Scale of modern training corpora makes manual review impractical, increasing contamination risk
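
The provenance gap above is partly addressable at ingestion time. A minimal sketch of a content-hash manifest that records what entered a training corpus, so later audits can check whether specific content was present (the record schema and function names are illustrative, not a standard format):

```python
import hashlib


def build_manifest(records):
    """Map each source to the SHA-256 digests of the records it contributed.

    `records` is an iterable of (source_id, text) pairs -- an illustrative
    schema, not a standard format.
    """
    manifest = {}
    for source_id, text in records:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        manifest.setdefault(source_id, []).append(digest)
    return manifest


def audit(manifest, candidate_text):
    """Return the sources that contributed an exact copy of `candidate_text`."""
    digest = hashlib.sha256(candidate_text.encode("utf-8")).hexdigest()
    return [src for src, digests in manifest.items() if digest in digests]
```

Exact-hash matching misses paraphrased or re-encoded copies; a production pipeline would also record licensing terms, timestamps, and the normalization applied before hashing.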
ASST-002

Biometric Data (6 incidents)

Physiological or behavioral data used for identification, including facial geometry, voiceprints, fingerprints, gait patterns, and iris scans. Once compromised, biometric data cannot be revoked or reissued.

How this asset appears in incidents

  • Facial recognition databases scraped from public sources without consent for model training
  • Voiceprint cloning enabling real-time impersonation of specific individuals
  • Biometric spoofing bypassing authentication systems using AI-generated synthetic samples
  • Mass surveillance datasets aggregating biometric identifiers across jurisdictions without oversight

Control domains: Privacy compliance, Identity verification, Data minimization

Likely owner: Privacy / Security

Documented incidents (6)

Risk considerations

  • Biometric identifiers are irrevocable — once compromised, they cannot be changed like passwords
  • Cross-system linking enables tracking individuals across contexts without their knowledge
  • Accuracy disparities across demographic groups produce disproportionate false-positive rates
ASST-007

Identity Credentials (7 incidents)

Digital or physical credentials used to verify identity, including passwords, tokens, certificates, API keys, and government-issued documents. AI-generated forgeries and credential theft undermine trust in verification systems.

How this asset appears in incidents

  • AI-generated identity documents passing automated and human verification checks
  • Credential stuffing at scale using AI to optimize password guessing and reuse attacks
  • Synthetic identity creation combining real and fabricated data to produce convincing personas
  • API key and token extraction through prompt injection or data leakage from AI systems

Control domains: Identity & access management, Fraud controls, KYC / AML

Likely owner: Security / Fraud

Documented incidents (7)

Risk considerations

  • AI-generated documents achieve increasing photorealism, eroding trust in physical and digital credentials
  • Compromised credentials propagate across interconnected systems via federation and SSO
  • Automated credential attacks operate at speeds that overwhelm traditional rate-limiting defenses
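
Rate limiting remains a baseline control against the automated credential attacks above, even though AI-driven campaigns can distribute load across many source addresses. A minimal sliding-window sketch (the thresholds and keying scheme are illustrative):

```python
import time
from collections import defaultdict, deque


class LoginRateLimiter:
    """Sliding-window limiter: block a key after too many recent failures.

    `key` might be an account ID, an IP, or a combination; the choice and
    the thresholds here are illustrative.
    """

    def __init__(self, max_failures=5, window_seconds=60):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.failures[key]
        # Drop failures that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) < self.max_failures

    def record_failure(self, key, now=None):
        now = time.monotonic() if now is None else now
        self.failures[key].append(now)
```

Per-key limits are only one layer: distributed attacks also call for global anomaly thresholds and credential-breach screening.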

Model

67 incident references across 4 assets

ASST-009

Large Language Models (46 incidents)

General-purpose text generation and reasoning models trained on broad internet-scale corpora (e.g., GPT-4, Claude, Gemini, LLaMA). Their generality makes them the most frequently involved asset class across documented incidents.

How this asset appears in incidents

  • Prompt injection and jailbreaks overriding safety instructions through crafted user input
  • Hallucinated outputs generating confident but factually incorrect information in high-stakes contexts
  • System prompt leakage exposing proprietary instructions and business logic to users
  • Automated content generation at scale for disinformation, fraud, or social engineering campaigns
  • Agentic tool misuse executing unintended actions via plugin or MCP integrations

Control domains: Application security, LLM security testing, Output verification

Likely owner: AppSec / AI Platform

Documented incidents (46) — showing top 10

View all 46 incidents involving Large Language Models →

Risk considerations

  • Broad capability surface means new attack vectors emerge with each deployment context
  • Fine-tuning on domain data can remove safety training, creating uncensored variants
  • Agentic deployments with tool access amplify the blast radius of prompt injection exploits
  • Rapid adoption outpaces organizational security maturity for LLM-specific threats
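
One common first-line mitigation for prompt injection is screening untrusted text (user input, retrieved documents, tool outputs) before it reaches the model. A heuristic sketch; pattern lists like this are easily bypassed and are only one layer of a defense-in-depth approach (the patterns are illustrative):

```python
import re

# Illustrative heuristics for common injection phrasings; real deployments
# combine many signals and assume any single filter can be evaded.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) instructions",
    r"disregard (the|your) system prompt",
    r"you are now [a-z]",
    r"reveal (the|your) (system prompt|instructions)",
]


def screen_untrusted_text(text):
    """Return the list of injection heuristics matched by untrusted input."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

A non-empty result is a signal to quarantine or down-weight the input, not proof of attack; benign text can match and crafted attacks will not.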
ASST-006

Foundation Models (7 incidents)

Large pre-trained models (text, image, audio, multimodal) that serve as the base for downstream fine-tuning and application development. Supply-chain risks propagate from foundation models to all downstream deployments.

How this asset appears in incidents

  • Supply-chain compromise where vulnerabilities in the base model affect all downstream applications
  • Capability overhang where latent capabilities emerge unexpectedly in fine-tuned variants
  • Model weight theft through insider access or infrastructure compromise at training organizations
  • Dual-use capability concerns when general-purpose models enable harmful applications without modification

Control domains: Model governance, Supply-chain security, Capability evaluation

Likely owner: AI Platform / Security

Documented incidents (7)

Risk considerations

  • Concentration risk: a small number of foundation models underpin thousands of production applications
  • Emergent capabilities may not be fully characterized before downstream deployment
  • Open-weight releases cannot be recalled once distributed, even when vulnerabilities are discovered
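
For the supply-chain risks above, one baseline control is pinning and verifying weight checksums so the artifact deployed is the artifact that was evaluated. A minimal sketch; the pinned-digest workflow is an assumption for illustration, not a description of any specific model registry:

```python
import hashlib


def sha256_file(path, chunk_size=1 << 20):
    """Stream a large weight file through SHA-256 without loading it fully."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_weights(path, pinned_digest):
    """Compare a downloaded artifact against a digest pinned at evaluation time."""
    return sha256_file(path) == pinned_digest
```

Checksums catch tampered or swapped files, not vulnerabilities baked into the weights themselves; those still require capability evaluation before the digest is pinned.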
ASST-012

Voice Synthesis (11 incidents)

AI systems capable of generating, cloning, or manipulating human voice audio with high fidelity. Modern voice synthesis requires only seconds of reference audio to produce convincing impersonation.

How this asset appears in incidents

  • Real-time voice cloning in phone calls and video conferences targeting executives or family members
  • Voice authentication bypass using synthesized voiceprints to access financial or government systems
  • Fabricated audio evidence used in legal proceedings, political manipulation, or extortion
  • Celebrity and public figure impersonation for scams, disinformation, or unauthorized endorsements

Control domains: Fraud controls, Communication security, Authentication

Likely owner: Security / Fraud

Documented incidents (11) — showing top 10

View all 11 incidents involving Voice Synthesis →

Risk considerations

  • Minimal reference audio (3-10 seconds) enables convincing clones, lowering the barrier to impersonation
  • Real-time synthesis makes telephone-based verification unreliable as an authentication factor
  • Detection tools lag behind generation quality, creating an asymmetric advantage for attackers
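
Because a voice alone can no longer authenticate a caller, some defenses move verification out of the audio channel entirely. A sketch of a time-windowed shared-secret callback code in the style of TOTP; the scheme and parameters are illustrative, not a documented standard:

```python
import hashlib
import hmac
import time


def callback_code(shared_secret: bytes, window: int) -> str:
    """Derive a 6-digit code both parties can compute from a pre-shared
    secret, for out-of-band verification when the voice on the phone
    cannot be trusted. Illustrative construction, loosely TOTP-like."""
    digest = hmac.new(shared_secret, str(window).encode(), hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"


def verify(shared_secret: bytes, code: str, now=None, step=60) -> bool:
    now = time.time() if now is None else now
    window = int(now // step)
    # Accept the current and previous window to tolerate clock skew.
    return any(
        hmac.compare_digest(callback_code(shared_secret, w), code)
        for w in (window, window - 1)
    )
```

The essential property is that the check happens on a channel the attacker's synthesized audio cannot reach; calling back a known number achieves the same goal with no code at all.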
ASST-010

Recommender Systems (3 incidents)

Algorithmic systems that curate, rank, and personalize content delivery to users across platforms. Recommender systems shape information exposure at population scale and can amplify harmful content through engagement optimization.

How this asset appears in incidents

  • Filter bubble creation progressively narrowing user information exposure through personalization
  • Engagement-driven amplification of sensational, divisive, or harmful content to maximize interaction
  • Radicalization pathways where recommendation chains lead users toward increasingly extreme content
  • Manipulation of ranking signals by adversaries to promote disinformation or suppress legitimate content

Control domains: Content policy, Algorithmic auditing, User safety

Likely owner: Product / Trust & Safety

Documented incidents (3)
ID · Title · Severity
INC-23-0009 · RealPage AI Algorithmic Rent-Fixing · high
INC-22-0002 · Meta Housing Ad Discrimination DOJ Settlement · high
INC-25-0020 · Instacart AI-Driven Algorithmic Price Discrimination · medium

Risk considerations

  • Engagement optimization objectives can conflict with user well-being and information quality
  • Population-scale influence operates below individual awareness thresholds
  • Feedback loops between user behavior and recommendations can produce emergent harmful patterns
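
The feedback-loop risk above can be illustrated with a toy model in which the system always recommends the currently most-engaged topic and each recommendation boosts that topic further. Real recommenders are far more complex; the dynamics and numbers here are purely illustrative:

```python
def simulate_feedback_loop(preferences, rounds=20, boost=0.3):
    """Toy recommender: each round, recommend the highest-weight topic and
    increase its weight by `boost`. Returns normalized exposure shares,
    showing how a small initial lead compounds into near-total dominance."""
    weights = dict(preferences)
    for _ in range(rounds):
        top = max(weights, key=weights.get)
        weights[top] += boost * weights[top]
    total = sum(weights.values())
    return {topic: w / total for topic, w in weights.items()}
```

Even with a modest starting lead, the favored topic captures nearly all exposure after a few dozen rounds, which is the narrowing dynamic the filter-bubble bullet describes.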

System

52 incident references across 3 assets

ASST-003

Content Platforms (17 incidents)

Digital platforms that host, distribute, or moderate user-generated and AI-generated content at scale, including social media, search engines, and publishing systems. These platforms serve as both distribution channels and targets for AI-enabled threats.

How this asset appears in incidents

  • AI-generated content floods overwhelming moderation systems with synthetic text, images, or video
  • Automated influence operations using platform distribution to amplify coordinated disinformation
  • Deepfake distribution publishing synthetic media that erodes trust in authentic content
  • Moderation evasion using AI to craft content that bypasses automated safety filters

Control domains: Content moderation, Platform integrity, Trust & safety

Likely owner: Trust & Safety / Product

Documented incidents (17) — showing top 10

View all 17 incidents involving Content Platforms →

Risk considerations

  • Scale of content flow makes comprehensive human review impossible, creating moderation gaps
  • Network effects amplify harmful content before detection systems can respond
  • Cross-platform propagation means content removed from one platform resurfaces on others
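
At flood scale, cheap first-pass signals matter more than per-item review. A minimal sketch that flags repeated near-identical posts by normalized fingerprint; the normalization and threshold are illustrative, and production systems use perceptual or locality-sensitive hashing to catch paraphrases as well:

```python
import hashlib
from collections import Counter


def flood_candidates(posts, threshold=3):
    """Group posts by a normalized text fingerprint and return the
    fingerprints repeated at least `threshold` times -- a cheap first-pass
    signal for coordinated or machine-generated floods."""
    counts = Counter()
    for text in posts:
        # Case-fold and collapse whitespace so trivial variants collide.
        normalized = " ".join(text.lower().split())
        counts[hashlib.sha256(normalized.encode("utf-8")).hexdigest()] += 1
    return {fp for fp, n in counts.items() if n >= threshold}
```

Flagged fingerprints feed review queues or velocity limits; exact-duplicate detection alone misses LLM-paraphrased floods, which is the moderation-evasion pattern noted above.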
ASST-004

Decision Automation (19 incidents)

AI systems that make or inform consequential decisions about individuals, including hiring, lending, sentencing, medical diagnosis, and benefits eligibility. These systems operate at the intersection of algorithmic efficiency and individual rights.

How this asset appears in incidents

  • Discriminatory scoring producing systematically biased outcomes across protected groups
  • Opaque denial decisions without explanation or meaningful appeal pathway for affected individuals
  • Automation bias where human operators default to machine recommendations without critical review
  • Cascading automated decisions where one system's output feeds into downstream decision chains

Control domains: Algorithmic auditing, Human-in-the-loop design, Fairness & ethics

Likely owner: Product / Risk / Legal

Documented incidents (19) — showing top 10

View all 19 incidents involving Decision Automation →

Risk considerations

  • Consequential decisions affecting rights and livelihoods demand higher reliability than current systems provide
  • Opacity of decision logic frustrates legal requirements for explanation and contestation
  • Speed and scale of automated decisions can cause widespread harm before errors are detected
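
A common first screen for the discriminatory-scoring pattern above is the "four-fifths" rule: flag when any group's selection rate falls below 80% of the highest group's rate. A minimal sketch; the group labels and data are illustrative, and the rule is a screening heuristic, not a legal determination:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> per-group selection rate."""
    return {g: selected / total for g, (selected, total) in outcomes.items()}


def adverse_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 trigger scrutiny under the four-fifths screen."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())
```

Passing the screen does not establish fairness, and failing it does not establish discrimination; it marks where a deeper algorithmic audit is warranted.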
ASST-001

Autonomous Agents (16 incidents)

AI systems that independently plan, execute multi-step tasks, use external tools, and operate with persistent state. Agentic systems combine LLM reasoning with real-world action capability, creating novel attack surfaces.

How this asset appears in incidents

  • Unintended autonomous actions executing harmful tool calls or API requests without user approval
  • Persistent state manipulation where compromised agent memory affects future decisions and actions
  • Cross-tool privilege escalation gaining access to resources beyond intended scope through chained tool use
  • Multi-agent coordination failures where interacting agents produce emergent harmful behavior
  • Prompt injection via tool outputs where external data sources inject instructions into agent reasoning

Control domains: Agent sandboxing, Tool access governance, Monitoring & observability

Likely owner: AppSec / AI Platform

Documented incidents (16) — showing top 10

View all 16 incidents involving Autonomous Agents →

Risk considerations

  • Tool access transforms language model vulnerabilities into real-world action capabilities
  • Persistent state means a single compromise can affect all subsequent agent interactions
  • Multi-step planning makes behavior harder to predict and audit than single-inference systems
  • The MCP and plugin ecosystem introduces supply-chain risks from untrusted tool providers
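
Tool access governance usually starts with a deny-by-default gate between the agent's plan and actual execution. A minimal sketch with an allowlist plus a human-approval tier for high-impact tools; the tool names and approval hook are illustrative:

```python
class ToolGate:
    """Deny-by-default authorization check for agent tool calls.

    Tools not on the allowlist are refused outright; tools on the
    `require_approval` tier additionally need an explicit human sign-off.
    """

    def __init__(self, allowed, require_approval=()):
        self.allowed = set(allowed)
        self.require_approval = set(require_approval)

    def authorize(self, tool_name, human_approved=False):
        if tool_name not in self.allowed:
            return False
        if tool_name in self.require_approval and not human_approved:
            return False
        return True
```

In practice the gate also needs argument-level checks (which recipient, which repository), since prompt-injected agents typically abuse permitted tools with attacker-chosen arguments rather than calling forbidden ones.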

Infrastructure

7 incident references across 2 assets

ASST-008

Industrial Control Systems (5 incidents)

Operational technology (OT) and SCADA systems that control physical infrastructure such as power grids, water treatment, manufacturing, and transportation. AI integration into these systems introduces cyber-physical attack surfaces.

How this asset appears in incidents

  • AI-enhanced reconnaissance of industrial control networks using automated vulnerability discovery
  • Adversarial manipulation of sensor data feeding AI-based process control systems
  • Predictive maintenance exploitation where compromised AI models cause deliberate equipment failure
  • Cascading infrastructure failures when AI-controlled systems propagate errors across interconnected utilities

Control domains: OT security, Safety-critical systems, Physical security

Likely owner: OT Security / Engineering

Documented incidents (5)

Risk considerations

  • Physical consequences of compromise include safety hazards, environmental damage, and loss of life
  • Legacy OT systems lack the security controls assumed by modern AI deployment frameworks
  • Convergence of IT and OT networks expands the attack surface for AI-enabled threats
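
Adversarial sensor manipulation can sometimes be caught by physics-based plausibility checks that are independent of any AI model: readings must stay within the physical range and cannot change faster than the underlying process allows. A minimal sketch (the limits are illustrative):

```python
def plausible(readings, max_delta, low, high):
    """Return True if every reading is inside the physical range [low, high]
    and no step-to-step change exceeds `max_delta`. Limits must come from
    process engineering, not from the AI model being protected."""
    if any(not (low <= r <= high) for r in readings):
        return False
    return all(abs(b - a) <= max_delta for a, b in zip(readings, readings[1:]))
```

Failing streams should fall back to safe-state logic rather than continue feeding AI-based process control, since the sensor (or its channel) can no longer be trusted.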
ASST-005

Financial Systems (2 incidents)

AI-integrated banking, trading, payments, and insurance infrastructure. Financial systems are high-value targets where AI-enabled fraud, market manipulation, and systemic risk intersect.

How this asset appears in incidents

  • AI-enhanced fraud schemes using synthetic identities and deepfakes to bypass financial verification
  • Algorithmic trading manipulation exploiting or gaming AI-driven market-making and trading systems
  • Automated lending discrimination where AI credit scoring produces disparate impact across demographics
  • Systemic risk amplification when correlated AI trading strategies create flash crashes or liquidity crises

Control domains: Financial controls, Regulatory compliance, Fraud detection

Likely owner: Risk / Compliance / CISO

Documented incidents (2)

Risk considerations

  • Financial losses are immediate and can cascade rapidly through interconnected institutions
  • Regulatory frameworks are still adapting to AI-specific risks in financial services
  • High-frequency automated systems operate faster than human oversight can intervene
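
Because automated systems act faster than human oversight, circuit breakers that halt activity on anomalous moves are a standard backstop. A toy sketch of a drawdown-triggered halt; the 7% default loosely mirrors common market-wide halt tiers, but the logic here is illustrative only:

```python
def circuit_breaker(prices, max_drawdown=0.07):
    """Return the index at which trading would halt, or None.

    Halts when price falls more than `max_drawdown` below the running
    session peak -- a machine-speed backstop that does not wait for a
    human to notice the move.
    """
    peak = prices[0]
    for i, price in enumerate(prices):
        peak = max(peak, price)
        if price < peak * (1 - max_drawdown):
            return i
    return None
```

Real halt rules are tiered and exchange-specific; the design point is that the tripwire runs at the same speed as the automation it constrains.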

Assets are assigned during incident classification based on evidence analysis. A single incident typically involves multiple technologies that interact to enable or amplify harm. Asset definitions and assignments follow the TopAIThreats methodology. Stable identifiers (ASST-001 through ASST-012) are permanent and never reused.