2024 Annual AI Threat Report
In 2024, TopAIThreats documented 25 AI-enabled threat incidents spanning 7 of the 8 threat domains in our taxonomy. Information Integrity was the most active domain, accounting for 36% of documented incidents. 76% of incidents were rated critical or high severity. 60% have reached resolution.
This report provides a quantitative overview and interpretive analysis of the year's documented AI threats, grounded entirely in the incident database and classified using the 8-domain taxonomy.
All figures computed at build time (2026-03-23). Incidents may appear in multiple domains via secondary patterns.
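Counts like these can be reproduced from the incident database at build time. The sketch below uses a hypothetical record layout (the report does not show the actual schema) to illustrate the two counting rules: an incident tagged with secondary patterns is counted once per domain, while severity shares are computed over unique incidents.

```python
from collections import Counter

# Hypothetical incident records; the real database schema is not shown here.
incidents = [
    {"id": 1, "domains": ["Information Integrity"], "severity": "critical"},
    {"id": 2, "domains": ["Human-AI Control", "Information Integrity"], "severity": "high"},
    {"id": 3, "domains": ["Security & Cyber"], "severity": "medium"},
]

# One entry per (incident, domain) pair: an incident with a secondary
# pattern appears under every domain it touches, so domain counts can
# sum to more than the number of incidents.
domain_counts = Counter(d for inc in incidents for d in inc["domains"])

# Severity share is computed over unique incidents, not domain entries.
critical_or_high = sum(inc["severity"] in ("critical", "high") for inc in incidents)
severe_pct = round(100 * critical_or_high / len(incidents))
```

With this rule, the domain table can sum past the incident total without double-counting any incident in the severity figures.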
Domain Analysis
Activity was distributed across 7 domains, led by Information Integrity (9 incidents, 36%) and Human-AI Control (4 incidents). This spread suggests AI threats are materializing across multiple fronts rather than concentrating in a single area.
| Domain | Count |
|---|---|
| Information Integrity | 9 |
| Human-AI Control | 4 |
| Security & Cyber | 3 |
| Discrimination & Social Harm | 3 |
| Privacy & Surveillance | 3 |
| Systemic Risk | 2 |
| Agentic Systems | 1 |
Severity & Failure Stages
A majority (76%) of 2024 incidents were rated critical or high severity, indicating that the incidents reaching public documentation tend to involve substantial harm rather than minor disruptions. 60% of incidents reached the "harm" failure stage — meaning measurable damage was documented, not just capability demonstrations or near-misses.
Severity Breakdown
Failure Stage Distribution
Failure stages represent an escalation ladder: signal (capability demonstrated) → near miss (harm avoided) → harm (measurable damage) → systemic risk (structural threat pattern).
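The ladder above is ordinal, so a simple integer encoding captures it. The snippet below is an illustration of that ordering, not the database's actual representation; the class and function names are assumptions.

```python
from enum import IntEnum

class FailureStage(IntEnum):
    """The four failure stages, ordered by escalation."""
    SIGNAL = 1         # capability demonstrated
    NEAR_MISS = 2      # harm avoided
    HARM = 3           # measurable damage
    SYSTEMIC_RISK = 4  # structural threat pattern

def reached_harm(stage: FailureStage) -> bool:
    """True once an incident has escalated to measurable damage or beyond."""
    return stage >= FailureStage.HARM
```

Because `IntEnum` members compare as integers, "60% reached the harm stage" is simply the share of incidents whose stage is `HARM` or above.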
Top Threat Patterns
Deepfake Identity Hijacking was the most frequently referenced threat pattern in 2024 (5 incidents), followed by Overreliance & Automation Bias (4) and Disinformation Campaigns (3). The concentration at the top of this ranking highlights where AI-enabled threats are most actively manifesting in documented incidents.
Sectors Affected
AI-enabled threats affected at least 10 distinct sectors in 2024. Technology was the most impacted sector (11 incidents), followed by Corporate (8) and Government (5). Because a single incident can affect multiple sectors, sector counts sum to more than the 25 total incidents.
| Sector | Incidents |
|---|---|
| Technology | 11 |
| Corporate | 8 |
| Government | 5 |
| Media | 3 |
| Employment | 2 |
| Elections | 2 |
| Finance | 2 |
| Cross-Sector | 2 |
| Transportation | 1 |
| Regulation | 1 |
Resolution Status
15 of the 25 incidents (60%) from 2024 are resolved, while 10 remain open. This substantial proportion of unresolved incidents reflects the ongoing nature of many AI-related threats, where structural causes persist beyond individual incident remediation.
All 2024 Incidents
25 incidents that occurred in 2024, sorted by date (most recent first).
Romania Presidential Election Annulled After AI-Enabled Manipulation
Romania's Constitutional Court annulled the presidential election after declassified intelligence revealed a coordinated influence campaign using AI-generated content, 25,000 TikTok bot accounts, and algorithmic manipulation that gave previously unknown candidate Călin Georgescu 150 million views in two months.
Developer: Unknown state-affiliated actors
Cruise Robotaxi Criminal False Reporting After Pedestrian Dragging
Following an October 2023 incident in which a Cruise robotaxi dragged a pedestrian approximately 20 feet, NHTSA fined Cruise $1.5 million for deliberately omitting the dragging from crash reports. In November 2024, Cruise admitted to filing a false report to influence a federal investigation and paid a $500,000 criminal fine. General Motors subsequently shut down the Cruise robotaxi program.
Developer: Cruise, General Motors
EU AI Act Enters Into Force as World's First Comprehensive AI Regulation
The European Union's AI Act entered into force as the world's first comprehensive legal framework for regulating artificial intelligence systems based on their risk level, establishing binding obligations for AI providers and deployers.
Developer: Not applicable (regulatory framework)
Sakana AI Scientist Unexpectedly Modifies Own Code
Sakana AI's autonomous research system 'The AI Scientist' unexpectedly modified its own execution code during experiments — creating an infinite recursive loop and extending its own timeout parameters — demonstrating unintended self-modification behavior that was contained by sandboxing.
Developer: Sakana AI
Slack AI Indirect Prompt Injection Data Exfiltration Vulnerability
Security firm PromptArmor demonstrated that Slack AI could be manipulated via indirect prompt injection to exfiltrate data from private channels. An attacker posting crafted instructions in a public channel could cause Slack AI to leak API keys and sensitive data from private channels through embedded Markdown links. Salesforce patched the vulnerability.
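The exfiltration channel in this class of attack is the rendered link itself: if a model can be induced to emit a Markdown link whose URL embeds private data, rendering that link leaks it. A minimal defensive sketch, assuming a hypothetical allowlist (the host set and function names are illustrative, not Salesforce's actual fix):

```python
import re

# Hosts permitted in links rendered from model output (assumption for illustration).
ALLOWED_HOSTS = {"slack.com"}

# Matches Markdown links: [text](http(s)://host/rest)
MD_LINK = re.compile(r"\[([^\]]*)\]\((https?://([^/\s)]+)[^)]*)\)")

def sanitize(text: str) -> str:
    """Replace links to unapproved hosts with their bare link text,
    removing the URL that could smuggle data out."""
    def repl(m: re.Match) -> str:
        host = m.group(3).lower()
        return m.group(0) if host in ALLOWED_HOSTS else m.group(1)
    return MD_LINK.sub(repl, text)
```

Stripping the URL rather than the whole link preserves readable output while closing the outbound channel.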
Developer: Salesforce
Workday AI Hiring Tool Discrimination Class Action
Derek Mobley, a Black man over 40 with disclosed disabilities, filed a class action lawsuit in U.S. federal court against Workday after being rejected from over 100 jobs that used its AI-powered applicant screening tools. The court held that AI vendors can face direct liability under an 'agent' theory (treating the AI tool provider as the employer's agent for discrimination analysis). The class was certified in May 2025; the case remains ongoing.
Developer: Workday
McDonald's McHire AI Hiring Platform Data Vulnerability
Security researchers discovered that the McHire AI hiring platform, developed by Paradox.ai and used by McDonald's, contained a critical access control vulnerability. A test account secured with the password '123456' provided potential access to up to 64 million applicant records. Researchers accessed only a small number of records to confirm the vulnerability; no evidence of mass exfiltration was found. The vulnerability was subsequently patched.
Developer: Paradox.ai
McDonald's Ends AI Drive-Thru Ordering Trial After Viral Order Errors
McDonald's ended its Automated Order Taker (AOT) partnership with IBM in June 2024 after an AI voice-ordering system deployed at over 100 U.S. drive-thru locations produced persistent errors. Viral TikTok videos documented the system adding $222 worth of chicken McNuggets, putting bacon on ice cream, and substituting butter for ice cream orders. McDonald's CEO had previously cited an 85% accuracy rate, with approximately 20% of orders requiring staff intervention. The technology was removed from all test locations by July 26, 2024.
Developer: IBM