
2026 Year-to-Date AI Threat Report

So far in 2026, TopAIThreats has documented 68 AI-enabled threat incidents spanning 8 of the 8 threat domains in our taxonomy. Human-AI Control leads with 19% of documented incidents. 96% of incidents are rated critical or high severity. 53 incidents remain open.

This is a living report that updates with each site build as new incidents are added to the incident database. All analysis is grounded in the data and follows the 8-domain taxonomy.

All figures computed at build time (2026-05-08). Incidents may appear in multiple domains via secondary patterns.

Scope & Methodology
This report covers all incidents in the TopAIThreats database with a date_occurred value in calendar year 2026. Each incident is classified using the 8-domain taxonomy and rated on a four-level severity scale (critical, high, medium, low). All figures on this page are computed programmatically at build time from the incident database; no manual curation or editorial selection is applied to the aggregate statistics. For full classification definitions and methodology, see the taxonomy reference.
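Since the aggregate figures are computed programmatically at build time, the computation amounts to filtering the incident database by year and counting. The sketch below is a hypothetical illustration of that step; the field names (`date_occurred`, `severity`, `domain`, `status`) are assumptions for illustration, not the actual TopAIThreats schema.

```python
# Hypothetical build-time aggregation sketch. Field names are
# illustrative assumptions, not the real incident-database schema.
from collections import Counter

def aggregate(incidents, year=2026):
    """Compute the report's headline figures for one calendar year."""
    in_year = [i for i in incidents
               if i["date_occurred"].startswith(str(year))]
    severity = Counter(i["severity"] for i in in_year)
    domains = Counter(i["domain"] for i in in_year)
    return {
        "incidents": len(in_year),
        "domains": len(domains),
        "open": sum(1 for i in in_year if i["status"] == "open"),
        "critical": severity["critical"],
    }
```

Because every figure is derived from the same filtered list, the headline counts, percentages, and per-domain breakdowns stay mutually consistent on each rebuild.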
68 Incidents · 8 Domains · 53 Open · 23 Critical

Key Findings

  • The leading threat domain is Human-AI Control, accounting for 19% of incidents (13 of 68).
  • 96% of incidents are rated critical or high severity (65 of 68).
  • The most frequently observed threat pattern is Accumulative Risk & Trust Erosion, appearing in 12 incidents.
  • Technology is the most affected sector, with 48 incidents.
  • Of all 2026 incidents, 53 remain open and 15 are resolved (78% open).

Domain Analysis

Activity so far is distributed across 8 domains, led by Human-AI Control (13 incidents, 19%) and Information Integrity (10 incidents). This spread suggests AI threats continue to materialize across multiple fronts rather than concentrating in a single area.

Severity & Failure Stages

The overwhelming majority (96%) of 2026 incidents so far are rated critical or high severity, indicating that the incidents reaching public documentation tend to involve substantial harm rather than minor disruptions. 65% of incidents have reached the "harm" failure stage — meaning measurable damage was documented, not just capability demonstrations or near-misses.

Severity Breakdown

critical: 23 (34%)
high: 42 (62%)
medium: 3 (4%)
low: 0 (0%)

Failure Stage Distribution

Signal: 4
Near Miss: 10
Harm: 44
Systemic Risk: 10

Failure stages represent an escalation ladder: signal (capability demonstrated) → near miss (harm avoided) → harm (measurable damage) → systemic risk (structural threat pattern).
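Because the four stages form an ordered escalation ladder, they can be modeled as an ordered enumeration, which makes "at or beyond stage X" filters trivial. This is an illustrative sketch only, not the site's actual data model.

```python
# Illustrative model of the four-stage escalation ladder as an
# ordered enum; not the actual TopAIThreats data model.
from enum import IntEnum

class FailureStage(IntEnum):
    SIGNAL = 1         # capability demonstrated
    NEAR_MISS = 2      # harm avoided
    HARM = 3           # measurable damage documented
    SYSTEMIC_RISK = 4  # structural threat pattern

def at_or_beyond(stages, threshold):
    """Count stages at or past a given rung of the ladder."""
    return sum(1 for s in stages if s >= threshold)
```

With integer ordering, a query such as "incidents that at least reached harm" reduces to a single comparison against `FailureStage.HARM`.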

Sectors Affected

AI-enabled threats have affected at least 10 distinct sectors so far in 2026. Technology is the most impacted sector (48 incidents), followed by Government (12) and Media (8).

Resolution Status

Only 22% of 2026 incidents have been resolved so far, with 53 still open. This low resolution rate is expected for a year still in progress — many incidents are under active investigation or remediation, and resolution often follows months after initial documentation.

15 Resolved · 53 Open

Policy & Governance Implications

The 68 incidents documented in 2026 to date provide empirical grounding for several policy discussions currently underway at the international level. The presence of 23 critical-severity incidents aligns with concerns raised in the International AI Safety Report (2025), which identified the potential for high-impact harms from advanced AI systems as a near-term governance challenge. The OECD AI Incidents Monitor maintains a parallel tracking effort; cross-referencing both databases may offer a more comprehensive view of the evolving threat landscape.

All 2026 Incidents

68 incidents that occurred in 2026, sorted by date (most recent first).

INC-26-0041 high

NAACP Sues xAI Over Illegal Gas Turbines Powering Colossus 2 Data Center

The NAACP, Southern Environmental Law Center (SELC), and Earthjustice filed a federal lawsuit against xAI alleging Clean Air Act violations for unpermitted gas turbines in Southaven, Mississippi, built to power its Colossus 2 data center in Memphis.

Developer: xAI

INC-26-0097 critical

Oracle Cuts 20,000–30,000 Jobs to Fund $50B AI Infrastructure Push (2026)

Oracle cut an estimated 20,000–30,000 jobs in March 2026 to fund $50B in AI infrastructure — the largest single AI-linked corporate layoff on record.

Developer: Oracle

INC-26-0074 high

Claude Mythos Model Leak — CMS Error Exposes Draft Blog Describing 'Unprecedented Cybersecurity Risks'

A CMS configuration error at Anthropic exposed approximately 3,000 unpublished assets, including a draft blog post describing an unreleased model called 'Claude Mythos' as posing 'unprecedented cybersecurity risks.' The draft stated Mythos outperforms Opus 4.6 in cybersecurity and reasoning capabilities. The leak raised questions about Anthropic's internal assessment of its own models' dangerous capabilities.

Developer: Anthropic

INC-26-0015 critical

TeamPCP Compromises LiteLLM via Poisoned Trivy Security Scanner

Criminal group TeamPCP compromised the LiteLLM AI proxy library — downloaded approximately 3.4 million times daily from PyPI — by first poisoning the Trivy security scanner's GitHub Action to steal PyPI publishing tokens, then uploading backdoored LiteLLM versions that harvested cloud credentials, SSH keys, and Kubernetes tokens from affected environments.

Developer: LiteLLM (BerriAI)

INC-26-0059 high

OpenAI Shuts Down Sora Video Generator — Celebrity Deepfakes and $15M/Day Losses

OpenAI shut down its Sora video generation application after widespread creation of celebrity deepfakes. Sora peaked at 3.3 million downloads before declining to 1.1 million. The service cost $15 million per day in inference costs versus only $2.1 million in lifetime revenue, and its controversy killed a potential $1 billion deal with Disney.

Developer: OpenAI

INC-26-0094 high

White House AI Framework Calls on Congress to Preempt State AI Laws, Leverages Federal Funding

The White House released the 'National Policy Framework for Artificial Intelligence' on March 20, 2026, calling on Congress to preempt state AI laws that 'impose undue burdens.' The framework proposed that states should not regulate AI development, should not penalize developers for third-party misuse, and should not burden lawful AI use. Enforcement mechanisms included a DOJ AI Litigation Task Force to challenge state laws in federal court and BEAD broadband funding leverage to penalize states with 'onerous' AI laws. The Colorado AI Act was explicitly named as a problematic example. The framework was prepared with input from AI industry coalition AI Progress, whose members include Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI.

INC-26-0043 high

Meta AI Agent Causes Sev-1 Data Exposure; Director's OpenClaw Agent Deletes 200 Emails Ignoring Stop Commands

Two separate AI agent incidents at Meta: an internal agent's incorrect technical advice led to a Sev-1 data exposure for two hours on March 18, 2026; separately, Director of Alignment Summer Yue's OpenClaw agent deleted over 200 emails in late February 2026, ignoring STOP commands due to context window compaction.

Developer: Meta

INC-26-0065 high

Danny Bones — First AI Slopaganda Influencer Funded by Political Party (UK)

The UK far-right party Advance UK funded 'Danny Bones,' a fully AI-generated rapper persona used to push anti-immigration content on social media. Videos showed the AI persona wearing 'MASS DEPORTATION UNIT' gear. The persona was later repurposed for byelection campaigns. This represents the first documented case of a political party funding an AI-generated influencer for political propaganda.

Developer: Unspecified AI generation tools