An incident-backed reference to the AI technologies and data types involved in documented threat incidents. Each asset profile is grounded in real-world evidence, mapped to organizational control domains, and linked to the incidents that demonstrate it.
The 12 assets are organized across 4 types that map to the AI technology stack: Data (training inputs and identity artifacts), Model (AI models and generation systems), System (platforms and decision engines), and Infrastructure (physical and financial systems with AI integration). Each asset profile includes a definition, incident signatures, co-occurring causal factors, control domains with likely ownership, and documented incidents.
Most incidents involve multiple assets across types. These assets are not mutually exclusive — treat them as an interconnected technology surface, not isolated components.
12 assets · 4 types · 170 references across 97 incidents · Last updated: 2026-03-03
Assets in Documented Incidents
Ranked by incident count. Click any asset to jump to its full profile.
Collections of structured or unstructured data used to train, fine-tune, or evaluate AI models. Training data quality, representativeness, and provenance directly determine model behavior and failure modes.
How this asset appears in incidents
Biased or unrepresentative samples leading to discriminatory model outputs across demographic groups
Poisoned training data introducing backdoors or adversarial behavior into production models
Unauthorized data inclusion incorporating copyrighted, personal, or sensitive content without consent
Mislabeled or noisy data causing systematic misclassification in deployed systems
Physiological or behavioral data used for identification, including facial geometry, voiceprints, fingerprints, gait patterns, and iris scans. Once compromised, biometric data cannot be revoked or reissued.
How this asset appears in incidents
Facial recognition databases scraped from public sources without consent for model training
Voiceprint cloning enabling real-time impersonation of specific individuals
Biometric spoofing bypassing authentication systems using AI-generated synthetic samples
Mass surveillance datasets aggregating biometric identifiers across jurisdictions without oversight
Digital or physical credentials used to verify identity, including passwords, tokens, certificates, API keys, and government-issued documents. AI-generated forgeries and credential theft undermine trust in verification systems.
How this asset appears in incidents
AI-generated identity documents passing automated and human verification checks
Credential stuffing at scale using AI to optimize password guessing and reuse attacks
Synthetic identity creation combining real and fabricated data to produce convincing personas
API key and token extraction through prompt injection or data leakage from AI systems
General-purpose text generation and reasoning models trained on broad internet-scale corpora (e.g., GPT-4, Claude, Gemini, LLaMA). Their generality makes them the most frequently involved asset class across documented incidents.
How this asset appears in incidents
Prompt injection and jailbreaks overriding safety instructions through crafted user input
Hallucinated outputs generating confident but factually incorrect information in high-stakes contexts
System prompt leakage exposing proprietary instructions and business logic to users
Automated content generation at scale for disinformation, fraud, or social engineering campaigns
Agentic tool misuse executing unintended actions via plugin or MCP integrations
Large pre-trained models (text, image, audio, multimodal) that serve as the base for downstream fine-tuning and application development. Supply-chain risks propagate from foundation models to all downstream deployments.
How this asset appears in incidents
Supply-chain compromise where vulnerabilities in the base model affect all downstream applications
Capability overhang where latent capabilities emerge unexpectedly in fine-tuned variants
Model weight theft through insider access or infrastructure compromise at training organizations
Dual-use capability concerns when general-purpose models enable harmful applications without modification
AI systems capable of generating, cloning, or manipulating human voice audio with high fidelity. Modern voice synthesis requires only seconds of reference audio to produce convincing impersonation.
How this asset appears in incidents
Real-time voice cloning in phone calls and video conferences targeting executives or family members
Voice authentication bypass using synthesized voiceprints to access financial or government systems
Fabricated audio evidence used in legal proceedings, political manipulation, or extortion
Celebrity and public figure impersonation for scams, disinformation, or unauthorized endorsements
Algorithmic systems that curate, rank, and personalize content delivery to users across platforms. Recommender systems shape information exposure at population scale and can amplify harmful content through engagement optimization.
How this asset appears in incidents
Filter bubble creation progressively narrowing user information exposure through personalization
Engagement-driven amplification of sensational, divisive, or harmful content to maximize interaction
Radicalization pathways where recommendation chains lead users toward increasingly extreme content
Manipulation of ranking signals by adversaries to promote disinformation or suppress legitimate content
Digital platforms that host, distribute, or moderate user-generated and AI-generated content at scale, including social media, search engines, and publishing systems. These platforms serve as both distribution channels and targets for AI-enabled threats.
How this asset appears in incidents
AI-generated content floods overwhelming moderation systems with synthetic text, images, or video
Automated influence operations using platform distribution to amplify coordinated disinformation
Deepfake distribution publishing synthetic media that erodes trust in authentic content
Moderation evasion using AI to craft content that bypasses automated safety filters
AI systems that make or inform consequential decisions about individuals, including hiring, lending, sentencing, medical diagnosis, and benefits eligibility. These systems operate at the intersection of algorithmic efficiency and individual rights.
How this asset appears in incidents
Discriminatory scoring producing systematically biased outcomes across protected groups
Opaque denial decisions without explanation or meaningful appeal pathway for affected individuals
Automation bias where human operators default to machine recommendations without critical review
Cascading automated decisions where one system's output feeds into downstream decision chains
AI systems that independently plan, execute multi-step tasks, use external tools, and operate with persistent state. Agentic systems combine LLM reasoning with real-world action capability, creating novel attack surfaces.
How this asset appears in incidents
Unintended autonomous actions executing harmful tool calls or API requests without user approval
Persistent state manipulation where compromised agent memory affects future decisions and actions
Cross-tool privilege escalation gaining access to resources beyond intended scope through chained tool use
Multi-agent coordination failures where interacting agents produce emergent harmful behavior
Prompt injection via tool outputs where external data sources inject instructions into agent reasoning
Operational technology (OT) and SCADA systems that control physical infrastructure such as power grids, water treatment, manufacturing, and transportation. AI integration into these systems introduces cyber-physical attack surfaces.
How this asset appears in incidents
AI-enhanced reconnaissance of industrial control networks using automated vulnerability discovery
Adversarial manipulation of sensor data feeding AI-based process control systems
Predictive maintenance exploitation where compromised AI models cause deliberate equipment failure
Cascading infrastructure failures when AI-controlled systems propagate errors across interconnected utilities
AI-integrated banking, trading, payments, and insurance infrastructure. Financial systems are high-value targets where AI-enabled fraud, market manipulation, and systemic risk intersect.
How this asset appears in incidents
AI-enhanced fraud schemes using synthetic identities and deepfakes to bypass financial verification
Algorithmic trading manipulation exploiting or gaming AI-driven market-making and trading systems
Automated lending discrimination where AI credit scoring produces disparate impact across demographics
Systemic risk amplification when correlated AI trading strategies create flash crashes or liquidity crises
Assets are assigned during incident classification based on evidence analysis. A single incident typically involves multiple technologies that interact to enable or amplify harm. Asset definitions and assignments follow the TopAIThreats methodology. Stable identifiers (ASST-001 through ASST-012) are permanent and never reused.