2026 Year-to-Date AI Threat Report
So far in 2026, TopAIThreats has documented 7 AI-enabled threat incidents spanning 5 of the 8 threat domains in our taxonomy. Information Integrity leads with 29% of documented incidents. 71% of incidents are rated critical or high severity. 6 incidents remain open.
This is a living report that updates with each site build as new incidents are added to the incident database. All analysis is grounded in the data and follows the 8-domain taxonomy.
All figures computed at build time (2026-03-23). Incidents may appear in multiple domains via secondary patterns.
Scope & Methodology
Incidents are included if their date_occurred value falls within calendar year 2026.
Each incident is classified using the 8-domain taxonomy and rated on a four-level severity scale (critical, high, medium, low).
All figures on this page are computed programmatically at build time from the incident database; no manual curation or editorial selection is applied to the aggregate statistics.
For full classification definitions and methodology, see the taxonomy reference.
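As an illustration of the build-time computation described above, the aggregate statistics can be derived from the incident records in a few lines. The record fields (`domain`, `severity`, `status`) and the sample data below are hypothetical stand-ins, not the actual TopAIThreats database schema; they are chosen to reproduce this report's headline figures.

```python
from collections import Counter

# Hypothetical incident records; field names are illustrative,
# not the actual TopAIThreats database schema.
incidents = [
    {"domain": "Information Integrity",  "severity": "high",     "status": "open"},
    {"domain": "Information Integrity",  "severity": "medium",   "status": "open"},
    {"domain": "Agentic Systems",        "severity": "high",     "status": "open"},
    {"domain": "Agentic Systems",        "severity": "high",     "status": "open"},
    {"domain": "Human-AI Control",       "severity": "critical", "status": "open"},
    {"domain": "Privacy & Surveillance", "severity": "high",     "status": "resolved"},
    {"domain": "Economic & Labor",       "severity": "low",      "status": "open"},
]

# Count incidents per domain and find the leader.
domain_counts = Counter(i["domain"] for i in incidents)

# Share of incidents rated critical or high, rounded to a whole percent.
crit_high = sum(i["severity"] in ("critical", "high") for i in incidents)
crit_high_pct = round(100 * crit_high / len(incidents))

# Open vs. resolved status.
open_count = sum(i["status"] == "open" for i in incidents)

print(domain_counts.most_common(1))  # leading domain and its count
print(crit_high_pct)                 # 71
print(open_count)                    # 6
```

Because every figure is recomputed this way on each build, the report stays consistent with the database without manual edits.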
Key Findings
- The leading threat domain is Information Integrity, accounting for 29% of incidents (2 of 7).
- 71% of incidents are rated critical or high severity (5 of 7).
- The most frequently observed threat pattern is Overreliance & Automation Bias, appearing in 2 incidents.
- Cross-Sector is the most affected sector, with 4 incidents.
- Of all 2026 incidents, 6 remain open and 1 is resolved (86% open).
Domain Analysis
Activity so far is distributed across 5 domains, led by Information Integrity (2 incidents, 29%) and Agentic Systems (2 incidents). This spread suggests AI threats continue to materialize across multiple fronts rather than concentrating in a single area.
| Domain | Count |
|---|---|
| Information Integrity | 2 |
| Agentic Systems | 2 |
| Human-AI Control | 1 |
| Privacy & Surveillance | 1 |
| Economic & Labor | 1 |
Severity & Failure Stages
A majority (71%) of 2026 incidents so far are rated critical or high severity, indicating that the incidents reaching public documentation tend to involve substantial harm rather than minor disruptions.
Severity Breakdown
Failure Stage Distribution
Failure stages represent an escalation ladder: signal (capability demonstrated) → near miss (harm avoided) → harm (measurable damage) → systemic risk (structural threat pattern).
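One way to make this escalation ladder machine-comparable is an ordered enum, so stages can be ranked and the most advanced stage of an incident selected programmatically. The class and value names below are illustrative, not taken from the report's actual codebase.

```python
from enum import IntEnum

class FailureStage(IntEnum):
    """Escalation ladder: a higher value means further along the ladder.
    Names and values are illustrative, not the report's actual schema."""
    SIGNAL = 1         # capability demonstrated
    NEAR_MISS = 2      # harm avoided
    HARM = 3           # measurable damage
    SYSTEMIC_RISK = 4  # structural threat pattern

# IntEnum members compare like integers, so stages can be ordered:
assert FailureStage.SIGNAL < FailureStage.NEAR_MISS < FailureStage.HARM

# The furthest stage an incident reached is simply the maximum:
print(max(FailureStage.SIGNAL, FailureStage.HARM).name)  # → HARM
```

Encoding the ladder as an ordering (rather than free-text labels) makes it straightforward to query, for example, all incidents at or beyond the harm stage.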
Top Threat Patterns
Threat patterns are relatively evenly distributed so far in 2026, with Overreliance & Automation Bias appearing most frequently (2 incidents). This early spread may shift as more incidents are documented through the year.
Sectors Affected
AI-enabled threats have affected at least 9 distinct sectors so far in 2026, led by Cross-Sector (4 incidents), followed by Finance (2), Technology (2), and Corporate (2). Because incidents may be tagged with multiple sectors, the counts below sum to more than the 7 incidents.
| Sector | Incidents |
|---|---|
| Cross-Sector | 4 |
| Finance | 2 |
| Technology | 2 |
| Corporate | 2 |
| Transportation | 1 |
| Public Safety | 1 |
| Healthcare | 1 |
| Employment | 1 |
| Media | 1 |
Resolution Status
Only 14% of 2026 incidents (1 of 7) have been resolved so far, with 6 still open. This low resolution rate is expected for a year still in progress — many incidents are under active investigation or remediation, and resolution often follows months after initial documentation.
Policy & Governance Implications
The 7 incidents documented in 2026 to date provide empirical grounding for several policy discussions currently underway at the international level. The one critical-severity incident documented so far aligns with concerns raised in the International AI Safety Report (2025), which identified the potential for high-impact harms from advanced AI systems as a near-term governance challenge. The OECD AI Incidents Monitor maintains a parallel tracking effort; cross-referencing both databases may offer a more comprehensive view of the evolving threat landscape.
All 2026 Incidents
7 incidents that occurred in 2026, sorted by date (most recent first).
Tesla Autopilot involved in 13 fatal crashes, US regulator finds
The U.S. National Highway Traffic Safety Administration (NHTSA) opened a formal investigation into Tesla's Autopilot system following at least 13 fatal crashes where the driver-assistance system was engaged or suspected to be active.
Developer: Tesla
Individual jailed for online gambling fraud using stolen identities
An individual was jailed for using AI-generated deepfake identity documents to create fraudulent accounts on online gambling platforms, representing an early criminal prosecution for AI-enabled identity fraud.
Developer: Unknown (commercial AI document generation tools)
Disrupting malicious uses of AI: June 2025 | OpenAI
OpenAI published a report documenting how threat actors from multiple countries attempted to use its models for malicious purposes including surveillance, influence operations, and social engineering, detailing its disruption efforts.
Developer: OpenAI (model developer)
Unit 42 Demonstrates Persistent Memory Injection in Amazon Bedrock Agents
Palo Alto Networks Unit 42 demonstrated a proof-of-concept attack chain where a malicious web page injected hidden prompts into an Amazon Bedrock Agent, which stored attacker instructions in long-term memory and later exfiltrated data during unrelated tasks.
Developer: Amazon Web Services (Bedrock platform)
AI Recommendation Poisoning via 'Summarize with AI' Buttons (31 Companies)
Microsoft Defender identified over 50 distinct hidden prompts from 31 companies across 14 industries, embedded in 'Summarize with AI' style buttons that inject persistent memory commands into AI assistants, biasing future recommendations toward specific brands.
Developer: 31 unnamed companies across 14 industries
AI impacting labor market like a tsunami as layoff fears mount
Multiple reports documented a rapid acceleration of AI-driven workforce displacement across sectors, with major corporations announcing significant layoffs directly attributed to AI automation and efficiency gains.
Developer: Multiple AI technology companies
New Zealand AI News Pages Flood Facebook with Rewritten Stories and Synthetic Images
At least 10 Facebook pages scraped legitimate New Zealand news articles, rewrote them using AI, and published them with unlabeled AI-generated images — including fabricated photos of real people. The 'NZ News Hub' page accumulated thousands of engagements before removal, while similar pages remain active.
Developer: Unknown operators of AI news pages