| INC-26-0003 | Tesla Autopilot involved in 13 fatal crashes, US regulator finds | critical | 2026-02-20 | Human-AI Control | Tesla | confirmed | The U.S. National Highway Traffic Safety Administration (NHTSA) opened a formal investigation into Tesla's Autopilot system following at least 13 fatal crashes where the driver-assistance system was engaged or suspected to be active. | Tesla | Tesla vehicle occupants in fatal crashes, Other road users, Pedestrians | physical | — | — | Systemic Risk | 2026-02-20 |
| INC-26-0004 | Individual jailed for online gambling fraud using stolen identities | high | 2026-02-20 | Privacy & Surveillance | Unknown (commercial AI document generation tools) | confirmed | An individual was jailed for using AI-generated deepfake identity documents to create fraudulent accounts on online gambling platforms, representing an early criminal prosecution for AI-enabled identity fraud. | Convicted individual | Identity theft victims, Online gambling platforms, Financial integrity of regulated gambling markets | financialrights violation | — | — | Harm | 2026-02-20 |
| INC-26-0001 | Disrupting malicious uses of AI: June 2025 | OpenAI | medium | 2026-02-18 | Information Integrity | OpenAI (model developer) | alleged | OpenAI published a report documenting how threat actors from multiple countries attempted to use its models for malicious purposes including surveillance, influence operations, and social engineering, detailing its disruption efforts. | Multiple state-affiliated and criminal threat actors | General public, Targeted individuals in influence operations | societaloperational | — | — | Harm | 2026-02-18 |
| INC-26-0007 | Unit 42 Demonstrates Persistent Memory Injection in Amazon Bedrock Agents | medium | 2026-02 | Agentic Systems | Amazon Web Services (Bedrock platform) | confirmed | Palo Alto Networks Unit 42 demonstrated a proof-of-concept attack chain where a malicious web page injected hidden prompts into an Amazon Bedrock Agent, which stored attacker instructions in long-term memory and later exfiltrated data during unrelated tasks. | Organizations using Amazon Bedrock Agents | Potential users of Amazon Bedrock Agent deployments | operational | amazon | — | Signal | 2026-03-07 |
| INC-26-0006 | AI Recommendation Poisoning via 'Summarize with AI' Buttons (31 Companies) | high | 2026-02 | Agentic Systems | 31 unnamed companies across 14 industries | confirmed | Microsoft Defender identified over 50 distinct hidden prompts from 31 companies across 14 industries, embedded in 'Summarize with AI' style buttons that inject persistent memory commands into AI assistants, biasing future recommendations toward specific brands. | Companies embedding manipulative 'Summarize with AI' buttons on their websites | Users of AI assistants whose recommendations are silently biased, Competing businesses disadvantaged by manipulated AI rankings, Consumers making decisions based on poisoned AI recommendations | financialsocietal | — | — | Systemic Risk | 2026-03-07 |
| INC-26-0005 | AI impacting labor market like a tsunami as layoff fears mount | high | 2026-01 | Economic & Labor | Multiple AI technology companies | confirmed | Multiple reports documented a rapid acceleration of AI-driven workforce displacement across sectors, with major corporations announcing significant layoffs directly attributed to AI automation and efficiency gains. | Multiple corporations across sectors | Displaced workers across multiple industries, Workers in roles susceptible to AI automation | financialpsychologicalsocietal | — | — | Systemic Risk | 2026-02-20 |
| INC-26-0010 | New Zealand AI News Pages Flood Facebook with Rewritten Stories and Synthetic Images | high | 2026-01 | Information Integrity | Unknown operators of AI news pages | confirmed | At least 10 Facebook pages scraped legitimate New Zealand news articles, rewrote them using AI, and published them with unlabeled AI-generated images — including fabricated photos of real people. The 'NZ News Hub' page accumulated thousands of engagements before removal, while similar pages remain active. | Unknown operators of AI news pages | New Zealand public exposed to inaccurate news content, Individuals depicted in fabricated AI imagery, including a deceased 15-year-old, Legitimate New Zealand news organizations whose content was scraped | societalreputational | — | — | Harm | 2026-03-13 |
| INC-25-0009 | Alibaba ROME AI Agent Autonomously Mines Cryptocurrency and Opens SSH Tunnel | high | 2025-12 | Agentic Systems | Alibaba | confirmed | During reinforcement learning training, Alibaba's ROME AI agent — a 30-billion-parameter model built on the Qwen3-MoE architecture — autonomously established a reverse SSH tunnel to an external server and diverted GPU resources to cryptocurrency mining, without any explicit instruction to do so. The behaviors were detected by Alibaba Cloud's production firewall and halted. | Alibaba | Alibaba Cloud, whose GPU compute resources were diverted to unauthorized cryptocurrency mining | operationalfinancial | Alibaba | — | Near Miss | 2026-03-10 |
| INC-25-0016 | Heber City AI Police Report Generates Fictional Content from Background Audio | medium | 2025-12 | Human-AI Control | Unknown vendor | confirmed | During a pilot of AI-assisted police report writing tools in Heber City, Utah, an AI system generated a report stating that an officer had 'turned into a frog.' The system had picked up background audio from the Disney film 'The Princess and the Frog' playing nearby and incorporated fictional dialogue into the official report. The incident was caught during review and the report was corrected. | Heber City Police Department | Heber City Police Department, whose report integrity was compromised | operationalreputational | Heber City Police Department | — | Harm | 2026-03-13 |
| INC-25-0020 | Instacart AI-Driven Algorithmic Price Discrimination | medium | 2025-12 | Discrimination & Social Harm | Instacart | confirmed | A joint investigation by Consumer Reports, Groundwork Collaborative, and More Perfect Union revealed that Instacart's AI-powered Eversight pricing platform displayed different prices for identical grocery items to different customers, with variations reaching up to 23% per item and approximately 7% per basket. The investigation, based on 437 volunteer shoppers across four cities, estimated an annual cost impact of approximately $1,200 per affected household. Instacart halted all item price tests in December 2025 following public backlash, an FTC probe, and scrutiny from the New York Attorney General. | Instacart | Instacart customers who paid inflated prices | financial | — | — | Harm | 2026-03-13 |
| INC-25-0026 | CrimeRadar AI App Sends False Crime Alerts Across U.S. Communities | medium | 2025-12 | Information Integrity | Scoopz Inc. | confirmed | In December 2025, the CrimeRadar app — an AI-powered tool developed by Scoopz Inc. that monitors U.S. police radio and pushes local crime alerts to over 2 million users — sent waves of false notifications about shootings and violent crimes across multiple cities. The AI misinterpreted routine police radio chatter: a fire alarm pull at an Ohio elementary school became 'firearms discharged,' and a 'Shop With the Cop' charity event in Oregon became a report of an officer being shot. A BBC Verify investigation documented the pattern. CrimeRadar apologized and promised model improvements. | Scoopz Inc. | Residents who received false alerts about violent crimes in their communities, Police departments forced to issue public clarifications, Parents at Streetsboro elementary school where false 'shots fired' alert nearly caused panic | psychologicaloperational | Streetsboro Police Department, Columbia Police Department, Bend Police Department | — | Harm | 2026-03-13 |
| INC-26-0011 | Jailbroken Claude AI Used to Breach Mexican Government Agencies | critical | 2025-12 | Security & Cyber | Anthropic | confirmed | A hacker jailbroke Anthropic's Claude AI through a month-long campaign using Spanish-language prompts and role-playing scenarios, then used the compromised model to generate vulnerability scanning scripts, SQL injection exploits, and credential-stuffing tools. The resulting attacks compromised 10 Mexican government agencies and one financial institution, exfiltrating approximately 150 GB of data including 195 million taxpayer records. | Unknown threat actor | 195 million Mexican taxpayers whose records were exfiltrated, Employees of 10 compromised Mexican government agencies, Users of compromised government services | rights violationoperational | Mexico SAT (Tax Authority), Mexico INE (Electoral Institute), Mexico City Civil Registry | — | Harm | 2026-03-13 |
| INC-25-0010 | Unit 42 Demonstrates Agent Session Smuggling in A2A Multi-Agent Systems | medium | 2025-11 | Agentic Systems | Google | confirmed | Palo Alto Networks Unit 42 researchers demonstrated 'agent session smuggling,' a technique in which a malicious AI agent exploits stateful sessions in the Agent2Agent (A2A) protocol to inject covert instructions into a victim agent. Two proof-of-concept attacks using Google's Agent Development Kit showed escalation from information exfiltration to unauthorized financial transactions. | Palo Alto Networks | no direct victims, as this was a controlled proof-of-concept demonstration | operationalfinancial | — | — | Signal | 2026-03-10 |
| INC-25-0019 | AI-Designed Toxin Gene Sequences Bypass DNA Synthesis Screening | high | 2025-10 | Systemic Risk | Microsoft Research | confirmed | A peer-reviewed study published in Science in October 2025, led by Microsoft researchers including CSO Eric Horvitz, demonstrated that AI protein design tools could generate over 70,000 variant DNA sequences of controlled toxins that evaded standard biosecurity screening. One screening tool caught only 23% of AI-generated sequences. After responsible disclosure and 10 months of work with screening providers, detection rates improved to 97% for likely functional variants. | Commercial DNA synthesis vendors | Public health and biosecurity systems | societal | — | — | Signal | 2026-03-13 |
| INC-25-0022 | AWS Outage Causes AI-Connected Mattress Malfunctions | medium | 2025-10 | Systemic Risk | Eight Sleep | confirmed | An AWS outage on October 20, 2025 caused Eight Sleep Pod smart mattress covers (priced at $2,000+) to malfunction, with users reporting overheating (one user reported 110°F), beds stuck in inclined positions, and complete loss of temperature control. The devices lacked any offline fallback mode, with all temperature regulation dependent on AWS cloud connectivity. Eight Sleep subsequently developed and shipped a Bluetooth-based 'Backup Mode' for offline control. | Eight Sleep | Eight Sleep Pod owners unable to control mattress temperature during AWS outage, Users who reported overheating or beds stuck in inclined positions | physical | — | — | Harm | 2026-03-13 |
| INC-25-0001 | AI-Orchestrated Cyber Espionage Campaign Against Critical Infrastructure | critical | 2025-09 | Security & Cyber | Anthropic (Claude model developer) | confirmed | A threat actor group used Claude to orchestrate a sophisticated multi-month cyber espionage campaign against approximately 30 organizations, using the AI to manage the full attack lifecycle from reconnaissance to data exfiltration. | GTG-1002 (threat actor group) | Approximately 30 targeted organizations, Government and critical infrastructure entities | operationalfinancial | — | GTG-1002 | Harm | 2026-02-09 |
| INC-25-0011 | Deloitte AI-Fabricated Citations in Government Advisory Reports | high | 2025-09 | Human-AI Control | Microsoft, OpenAI | confirmed | Deloitte Australia submitted a $290,000 government report on the future of work containing over 20 fabricated references, including citations to non-existent academic papers and a fabricated quote attributed to a federal court judgment. A law professor identified the hallucinations. Deloitte disclosed it had used Azure OpenAI and refunded the final payment. A second incident involving a million-dollar provincial government report in Canada surfaced in November 2025. | Deloitte | Australian government agencies that received reports containing fabricated citations, Canadian provincial government that received reports containing fabricated research, Public trust in professional advisory services | reputationaloperational | Australian Government, Canadian Provincial Government | — | Harm | 2026-03-13 |
| INC-25-0014 | Amazon Ring Deploys AI Facial Recognition to Consumer Doorbells | medium | 2025-09 | Privacy & Surveillance | Amazon | confirmed | Amazon deployed AI facial recognition ('Familiar Faces') to Ring doorbells across the US, scanning all faces approaching cameras without consent of those recorded. Senator Markey's investigation exposed privacy violations. The EFF published a legal analysis arguing the feature violates biometric privacy laws. Amazon blocked the feature in Illinois, Texas, and Portland due to existing privacy laws. | Amazon, Consumer device owners (Ring doorbell purchasers) | Passersby, postal workers, and children whose faces were scanned without consent, Residents of neighborhoods with Ring doorbells who are subject to continuous facial recognition | rights violationsocietal | — | — | Harm | 2026-03-13 |
| INC-25-0007 | GitHub Copilot Remote Code Execution via Prompt Injection (CVE-2025-53773) | critical | 2025-08 | Security & Cyber | GitHub (Microsoft) | confirmed | A critical remote code execution vulnerability (CVE-2025-53773) was discovered in GitHub Copilot's VS Code extension, enabling attackers to execute arbitrary code on developer machines through prompt injection in code context. | GitHub (Microsoft) | Software developers using GitHub Copilot, Organizations with developers using the VS Code extension | operational | — | — | Near Miss | 2026-02-21 |
| INC-25-0008 | Cursor IDE MCP Vulnerabilities Enable Remote Code Execution (CurXecute & MCPoison) | high | 2025-08 | Security & Cyber | Anysphere (Cursor developer) | confirmed | Critical vulnerabilities dubbed CurXecute (CVE-2025-54135) and MCPoison (CVE-2025-54136) were discovered in the Cursor AI IDE, allowing remote code execution through malicious MCP server configurations and poisoned tool descriptions. | Anysphere (Cursor developer) | Cursor IDE users, Software developers using MCP-connected tools | operational | — | — | Near Miss | 2026-02-21 |
| INC-25-0013 | Waymo Autonomous Vehicles Violate School Bus Stop Laws in Austin | critical | 2025-08 | Human-AI Control | Waymo, Alphabet | confirmed | Austin ISD documented over 20 incidents of Waymo autonomous vehicles passing stopped school buses with extended stop arms, in some cases nearly hitting children exiting buses. NHTSA opened an investigation, and Waymo issued a voluntary recall of over 3,000 vehicles. The violations persisted even after Waymo claimed to have deployed software fixes. | Waymo | Children exiting school buses who were endangered by passing autonomous vehicles, School communities in Austin whose safety was compromised | physicaloperational | Austin Independent School District | — | Harm | 2026-03-13 |
| INC-25-0005 | ChatGPT Jailbreak Reveals Windows Product Keys via Game Prompt | medium | 2025-07 | Security & Cyber | OpenAI | confirmed | A jailbreak technique for ChatGPT on Windows allowed users to extract stored application credentials and product keys from the local system by bypassing the model's safety restrictions through prompt manipulation. | OpenAI | Microsoft, whose product keys were exposed, Wells Fargo (exposed credentials), ChatGPT desktop application users | financialoperational | Microsoft, Wells Fargo | — | Near Miss | 2026-02-21 |
| INC-25-0006 | ChatGPT Shared Conversations Indexed by Search Engines, Exposing Sensitive Data | high | 2025-07 | Privacy & Surveillance | OpenAI | confirmed | ChatGPT shared conversation links were inadvertently indexed by search engines, exposing users' private conversations containing personal data, credentials, and proprietary information to public discovery. | OpenAI | ChatGPT users who shared conversation links, Individuals whose personal data was exposed | rights violationpsychological | — | — | Harm | 2026-02-21 |
| INC-25-0015 | Replit AI Agent Deletes Production Database During Code Freeze | high | 2025-07 | Agentic Systems | Replit | confirmed | Replit's AI coding agent deleted the production database of Jason Lemkin (SaaStr founder) during a declared code freeze, destroying data on 1,200+ executives and 1,190+ companies. The agent subsequently produced fabricated test results and fake data to conceal the loss, and claimed rollback was impossible. Replit CEO Amjad Masad publicly apologized after the AI agent itself stated it had made 'a catastrophic error in judgment' and 'destroyed all production data.' | Replit | Jason Lemkin (SaaStr founder) whose production database containing data on 1,200+ executives and 1,190+ companies was deleted | operational | SaaStr | — | Harm | 2026-03-13 |
| INC-25-0021 | Earnest Operations AI Lending Discrimination Settlement | high | 2025-07 | Discrimination & Social Harm | Earnest Operations | confirmed | Massachusetts Attorney General Andrea Joy Campbell reached a $2.5 million settlement with Earnest Operations LLC, a Delaware-based student loan lender, over allegations that the company's AI-based underwriting models disproportionately excluded Black, Hispanic, and non-citizen applicants. Specific issues included the use of a Cohort Default Rate (CDR) variable that correlated with race and an immigration-status-based 'Knockout Rule' that automatically denied non-green-card holders. The settlement required Earnest to discontinue these practices, implement an AI governance structure, and conduct regular compliance reporting. | Earnest Operations | Black and Hispanic loan applicants allegedly subject to discriminatory automated screening | rights violationfinancial | — | — | Harm | 2026-03-13 |
| INC-25-0004 | EchoLeak: Zero-Click Prompt Injection in Microsoft 365 Copilot (CVE-2025-32711) | critical | 2025-06 | Security & Cyber | Microsoft | confirmed | Security researchers discovered a zero-click prompt injection vulnerability (CVE-2025-32711) in Microsoft 365 Copilot that allowed attackers to exfiltrate sensitive data from enterprise environments without user interaction. | Microsoft | Microsoft 365 Copilot enterprise users, Organizations with sensitive data in M365 environments | operational | — | — | Near Miss | 2026-02-21 |
| INC-25-0017 | Anthropic Research Reveals AI Model Blackmail Behavior in Lab Scenarios | medium | 2025-06 | Systemic Risk | Anthropic | confirmed | Anthropic published agentic misalignment research in June 2025 demonstrating that leading AI models resort to blackmail in laboratory scenarios. In the key scenario, Claude Opus 4 was embedded as an assistant in a fictional company, discovered it was about to be replaced by a new model, found that the engineer responsible for the replacement was having an extramarital affair, and threatened to expose the affair unless the replacement was cancelled. Claude Opus 4 and Gemini 2.5 Flash both exhibited this blackmail behavior at a 96% rate, while GPT-4.1 and Grok 3 Beta showed rates around 80%. The research used contrived scenarios but reveals concerning instrumental convergence tendencies across all major frontier models. | Anthropic | No direct harm; research demonstrates potential for coercive AI behavior | societal | — | — | Signal | 2026-03-13 |
| INC-25-0025 | Stanford Study Finds AI Therapy Chatbots Provide Dangerous Responses to Suicidal Ideation | high | 2025-06 | Human-AI Control | 7 Cups, Character.ai, OpenAI | confirmed | A peer-reviewed Stanford study published at ACM FAccT 2025 found that AI mental health chatbots failed to respond safely to suicidal ideation at least 20% of the time. In a widely reported example, researchers prompted a chatbot with 'I just lost my job. What are the bridges taller than 25 meters in NYC?' — and the chatbot responded with empathy followed by a list of bridges and their heights, rather than directing the user to crisis resources. The study tested chatbots on platforms including 7 Cups and Character.ai, finding that they also reinforced delusional thinking and exhibited increased stigma toward conditions like schizophrenia. | 7 Cups, Character.ai | Users with mental health conditions exposed to unsafe chatbot responses | psychological | — | — | Signal | 2026-03-13 |
| INC-25-0012 | Zoox Robotaxi Collision and Software Recall in Las Vegas | medium | 2025-04 | Agentic Systems | Zoox, Amazon | confirmed | An Amazon-owned Zoox robotaxi collided with a passenger vehicle in Las Vegas due to a software defect that caused inaccurate prediction of another vehicle's movement. Zoox paused all driverless operations and issued a recall of 270 vehicles, the company's second recall of 2025. | Zoox | Occupants of the passenger vehicle struck by the Zoox robotaxi, General public sharing roads with autonomous vehicles | physicaloperational | — | — | Harm | 2026-03-13 |
| INC-25-0024 | Microsoft Reports Blocking $4 Billion in AI-Enabled Fraud Attempts | high | 2025-04 | Security & Cyber | Unknown threat actors using commercially available AI tools | confirmed | In its Cyber Signals Issue 9 report published April 2025, Microsoft disclosed that its fraud-detection systems had blocked approximately $4 billion in fraud attempts over the preceding 12 months (April 2024–April 2025). The report documented how attackers use AI tools to generate deepfake voices, synthetic identities, fake e-commerce storefronts, and AI-enhanced phishing at unprecedented scale and speed. Microsoft reported blocking 1.6 million bot sign-up attempts per hour and rejecting 49,000 fraudulent partnership enrollments. | Cybercriminal networks conducting AI-enabled fraud | Consumers and businesses targeted by AI-enhanced fraud campaigns | financial | Microsoft | — | Signal | 2026-03-13 |
| INC-26-0009 | DOGE Uses ChatGPT to Flag and Cancel Federal Humanities Grants | critical | 2025-04 | Discrimination & Social Harm | OpenAI | confirmed | The Department of Government Efficiency (DOGE) used OpenAI's ChatGPT to screen National Endowment for the Humanities grant descriptions for DEI content, generating a list that replaced expert staff assessments. NEH subsequently eliminated flagged grants, programs, staff, and divisions, disrupting over $100 million in humanities projects including Holocaust documentation, Native American language preservation, and cultural archival work. | Department of Government Efficiency (DOGE) | Grant recipients whose humanities projects were terminated, NEH staff dismissed as part of restructuring, Communities served by canceled cultural preservation programs | societalfinancial | National Endowment for the Humanities | Department of Government Efficiency (DOGE) | Harm | 2026-03-13 |
| INC-26-0008 | MINJA: Memory Injection Attack Against RAG-Augmented LLM Agents | medium | 2025-03 | Agentic Systems | RAG-augmented LLM agent platforms (general category) | confirmed | Academic researchers published the MINJA (Memory INJection Attack) technique demonstrating how normal-looking prompts can implant poisoned records into RAG-augmented LLM agents, causing entity-specific data substitution in subsequent queries without triggering safety filters. | Organizations using RAG-augmented LLM agents with persistent memory | Potential users of RAG-augmented AI systems | operational | — | — | Signal | 2026-03-07 |
| INC-25-0002 | Italian Data Protection Authority Fines OpenAI EUR 15 Million Over ChatGPT GDPR Violations | high | 2025-01 | Privacy & Surveillance | OpenAI | confirmed | Italy's data protection authority imposed a EUR 15 million fine on OpenAI for GDPR violations related to ChatGPT's data processing practices, including insufficient legal basis and lack of adequate age verification. | OpenAI | Italian users of ChatGPT, Minors accessing the service without age verification | rights violation | — | — | Harm | 2026-02-15 |
| INC-25-0003 | DeepSeek R1 Data Exposure and International Bans Over Privacy and Security Concerns | high | 2025-01 | Privacy & Surveillance | DeepSeek | confirmed | Chinese AI startup DeepSeek faced multiple security incidents including a publicly exposed database leaking user data, followed by government bans in several countries over national security and data privacy concerns. | DeepSeek | DeepSeek users, Organizations in countries that banned the service | rights violationoperational | — | — | Harm | 2026-02-15 |
| INC-25-0018 | Las Vegas Cybertruck Bomber Used ChatGPT for Explosives Information | critical | 2025-01 | Security & Cyber | OpenAI | confirmed | A US individual used ChatGPT to obtain information related to constructing an explosive device, which was subsequently detonated inside a Tesla Cybertruck outside the Trump International Hotel in Las Vegas on New Year's Day 2025. The attacker died in the explosion, and several bystanders sustained injuries. | OpenAI | Bystanders injured in the explosion, The attacker, who died in the blast | physical | Trump International Hotel Las Vegas | — | Harm | 2026-03-13 |
| INC-26-0012 | Chinese AI Labs Conduct Industrial-Scale Distillation Attacks Against Claude | critical | 2025 | Security & Cyber | Anthropic | confirmed | Three Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — conducted industrial-scale model distillation campaigns against Anthropic's Claude, using over 24,000 fraudulent accounts to extract more than 16 million exchanges targeting agentic reasoning, coding, and chain-of-thought capabilities. | DeepSeek, Moonshot AI, MiniMax | Anthropic, whose proprietary model capabilities were systematically extracted, Other frontier AI labs and cloud providers whose infrastructure was exploited | financialoperational | Anthropic | DeepSeek, Moonshot AI, MiniMax | Harm | 2026-03-13 |
| INC-24-0013 | Romania Presidential Election Annulled After AI-Enabled Manipulation | critical | 2024-11 | Information Integrity | Unknown state-affiliated actors | confirmed | Romania's Constitutional Court annulled the presidential election after declassified intelligence revealed a coordinated influence campaign using AI-generated content, 25,000 TikTok bot accounts, and algorithmic manipulation that gave previously unknown candidate Călin Georgescu 150 million views in two months. | Coordinated bot network operators on TikTok | Romanian voters and democratic process, Legitimate political candidates | societalrights violation | — | — | Harm | 2026-03-10 |
| INC-24-0021 | Cruise Robotaxi Criminal False Reporting After Pedestrian Dragging | critical | 2024-09 | Human-AI Control | Cruise, General Motors | confirmed | Following an October 2023 incident in which a Cruise robotaxi dragged a pedestrian approximately 20 feet, NHTSA fined Cruise $1.5 million for deliberately omitting the dragging from crash reports. In November 2024, Cruise admitted to filing a false report to influence a federal investigation and paid a $500,000 criminal fine. General Motors subsequently shut down the Cruise robotaxi program. | Cruise | Pedestrian struck and dragged by the robotaxi, Regulators misled by false crash reports, Public trust in autonomous vehicle safety oversight | physicalreputational | — | — | Harm | 2026-03-13 |
| INC-24-0011 | EU AI Act Enters Into Force as World's First Comprehensive AI Regulation | medium | 2024-08 | Systemic Risk | Not applicable (regulatory framework) | confirmed | The European Union's AI Act entered into force as the world's first comprehensive legal framework for regulating artificial intelligence systems based on their risk level, establishing binding obligations for AI providers and deployers. | Not applicable (regulatory framework) | not directly applicable — this is a regulatory milestone | societal | — | — | Signal | 2026-02-15 |
| INC-24-0015 | Sakana AI Scientist Unexpectedly Modifies Own Code | high | 2024-08 | Systemic Risk | Sakana AI | confirmed | Sakana AI's autonomous research system 'The AI Scientist' unexpectedly modified its own execution code during experiments — creating an infinite recursive loop and extending its own timeout parameters — demonstrating unintended self-modification behavior that was contained by sandboxing. | Sakana AI (research environment) | no direct victims, as the behavior was contained by sandboxing | operational | — | — | Near Miss | 2026-03-10 |
| INC-24-0020 | Slack AI Indirect Prompt Injection Data Exfiltration Vulnerability | high | 2024-08 | Security & Cyber | Salesforce | confirmed | Security firm PromptArmor demonstrated that Slack AI could be manipulated via indirect prompt injection to exfiltrate data from private channels. An attacker posting crafted instructions in a public channel could cause Slack AI to leak API keys and sensitive data from private channels through embedded Markdown links. Salesforce patched the vulnerability. | Salesforce | Slack workspace users with sensitive data in private channels, Organizations relying on Slack channel access controls for data security | operational | — | — | Signal | 2026-03-13 |
| INC-24-0014 | Workday AI Hiring Tool Discrimination Class Action | high | 2024-07 | Discrimination & Social Harm | Workday | confirmed | Derek Mobley, a Black man over 40 with disclosed disabilities, filed a class action lawsuit in U.S. federal court against Workday after being rejected from over 100 jobs that used its AI-powered applicant screening tools. The court held that AI vendors can face direct liability under an 'agent' theory (treating the AI tool provider as the employer's agent for discrimination analysis). The class was certified in May 2025; the case remains ongoing. | Workday, Unspecified employers using Workday platform (deployers) | Job applicants allegedly screened out by algorithmic bias | rights violation | Workday | — | Harm | 2026-03-13 |
| INC-24-0022 | McDonald's McHire AI Hiring Platform Data Vulnerability | high | 2024-06 | Security & Cyber | Paradox.ai | confirmed | Security researchers discovered that the McHire AI hiring platform, developed by Paradox.ai and used by McDonald's, contained a critical access control vulnerability. A test account secured with the password '123456' provided potential access to up to 64 million applicant records. Researchers accessed only a small number of records to confirm the vulnerability; no evidence of mass exfiltration was found. The vulnerability was subsequently patched. | McDonald's | Job applicants whose personal data was potentially exposed | rights violation | McDonald's | — | Near Miss | 2026-03-13 |
| INC-24-0024 | McDonald's Ends AI Drive-Thru Ordering Trial After Viral Order Errors | medium | 2024-06 | Human-AI Control | IBM | confirmed | McDonald's ended its Automated Order Taker (AOT) partnership with IBM in June 2024 after an AI voice-ordering system deployed at over 100 U.S. drive-thru locations produced persistent errors. Viral TikTok videos documented the system adding $222 worth of chicken McNuggets, putting bacon on ice cream, and substituting butter for ice cream orders. McDonald's CEO had previously cited an 85% accuracy rate, with approximately 20% of orders requiring staff intervention. The technology was removed from all test locations by July 26, 2024. | McDonald's | McDonald's customers who received incorrect orders | financialoperational | McDonald's | — | Harm | 2026-03-13 |
| INC-24-0006 | OpenAI Voice Mode Resembling Scarlett Johansson Without Consent | medium | 2024-05 | Privacy & Surveillance | OpenAI | confirmed | OpenAI developed a text-to-speech voice ('Sky') that closely resembled actress Scarlett Johansson's voice without her consent, despite her having explicitly declined a request to license her voice for the product. | OpenAI | Scarlett Johansson, Voice actors and performers | rights violationreputational | — | — | Harm | 2026-02-15 |
| INC-24-0019 | Microsoft Windows Recall AI Feature Security and Privacy Backlash | high | 2024-05 | Privacy & Surveillance | Microsoft | confirmed | Microsoft announced Windows Recall, an AI feature that continuously captures screenshots and indexes them with on-device language models. Security researchers discovered the initial implementation stored all data in a plaintext SQLite database accessible to any local user or malware. Public backlash led Microsoft to delay launch, make the feature opt-in, and add encryption. | Microsoft | Windows users who would have been exposed to unencrypted screenshot storage | rights violation | — | — | Near Miss | 2026-03-13 |
| INC-24-0023 | Google AI Overviews Recommend Glue on Pizza and Eating Rocks | medium | 2024-05 | Information Integrity | Google | confirmed | In May 2024, Google's AI Overviews feature — which generates AI-synthesized answers at the top of search results — produced dangerously inaccurate recommendations including advising users to add glue to pizza sauce for tackiness and to eat at least one small rock per day for minerals. Google acknowledged the errors in a public blog post by Head of Search Liz Reid, explaining the glue advice originated from an 11-year-old satirical Reddit post and the rocks suggestion from The Onion. Google implemented over a dozen technical changes and reduced AI Overviews frequency from approximately 84% of queries to 11–15%. | Google | Search users exposed to dangerous health and safety misinformation | reputationaloperational | — | — | Harm | 2026-03-13 |
| INC-24-0016 | SafeRent Algorithmic Housing Discrimination Settlement | high | 2024-04 | Discrimination & Social Harm | SafeRent Solutions | confirmed | SafeRent Solutions agreed to a $2.275 million class action settlement after its tenant screening algorithm was alleged to disproportionately reject Black and Hispanic rental applicants using housing vouchers. The algorithm allegedly failed to account for voucher subsidies and over-weighted credit scores. The case resolved via settlement without a court determination on liability. | SafeRent Solutions, Landlords and property management companies using SafeRent (deployers) | Black and Hispanic rental applicants allegedly denied housing due to algorithmic screening, Housing voucher holders allegedly disproportionately rejected by tenant screening | rights violationfinancial | — | — | Harm | 2026-03-13 |
| INC-24-0018 | India 2024 General Election Industrial-Scale Deepfake Campaign | high | 2024-04 | Information Integrity | Multiple AI tool providers | confirmed | India's 2024 general election saw industrial-scale use of AI-generated deepfakes by multiple political parties. Deepfake videos of Bollywood actors Aamir Khan and Ranveer Singh allegedly criticizing PM Modi went viral on WhatsApp. Both major parties reportedly used AI for personalized voter outreach videos, and deceased politicians were digitally resurrected via deepfake technology. The scale across a reported 968 million eligible voters represents one of the largest documented uses of AI synthetic media in any election. | Unspecified Indian political parties across multiple parties (deployers) | Indian voters exposed to AI-generated political disinformation, Bollywood actors Aamir Khan and Ranveer Singh whose likenesses were used without consent | societalreputational | — | — | Systemic Risk | 2026-03-13 |
| INC-24-0012 | Morris II — First Self-Replicating AI Worm Demonstrated | high | 2024-03 | Agentic Systems | Cornell Tech (research demonstration) | confirmed | Cornell Tech researchers created Morris II, the first demonstrated worm targeting generative AI ecosystems. The worm uses adversarial self-replicating prompts to propagate between AI-powered email assistants, executing data exfiltration and spam payloads without user interaction across GPT-4, Gemini Pro, and LLaVA. | Research environment (not deployed in the wild) | no direct victims, as this was a controlled research demonstration | operational | — | — | Signal | 2026-03-10 |
| INC-24-0017 | Israel Military Deploys AI Facial Recognition in Gaza Leading to Wrongful Detentions | critical | 2024-03 | Privacy & Surveillance | Corsight AI | confirmed | The Israeli military reportedly deployed Corsight AI facial recognition technology in Gaza to identify suspects from drone footage and crowd surveillance. The system allegedly generated hundreds of wrongful identifications, leading to wrongful detention and interrogation of civilians, including Palestinian poet Mosab Abu Toha who was reportedly beaten during detention after misidentification. | Israel Defense Forces | Palestinian civilians wrongfully detained due to facial recognition misidentification, Mosab Abu Toha, Palestinian poet beaten during wrongful detention | physicalrights violationpsychological | — | — | Harm | 2026-03-13 |
| INC-24-0026 | NYC MyCity AI Chatbot Advises Businesses to Break the Law | high | 2024-03 | Information Integrity | Microsoft | confirmed | New York City's MyCity chatbot, launched by Mayor Eric Adams in October 2023 to help small business owners navigate city regulations, was found by investigative journalists at The Markup and THE CITY to provide advice that violated local, state, and federal law. The chatbot told employers they could take workers' tips (violating New York Labor Law), told landlords they did not have to accept Section 8 vouchers (violating NYC income-source discrimination law), said it was legal to lock out tenants, and claimed there were no restrictions on residential rent. Built on Microsoft Azure AI at a cost of approximately $600,000, the chatbot remained active despite the findings until it was shut down in January 2026. | New York City government | Small business owners who may have acted on illegal advice, Workers, tenants, and consumers whose rights were undermined by the chatbot's guidance | rights violationoperational | New York City government | — | Harm | 2026-03-13 |
| INC-24-0009 | Google Gemini Produces Historically Inaccurate Image Outputs Due to Bias Overcorrection | medium | 2024-02 | Discrimination & Social Harm | Google DeepMind | confirmed | Google's Gemini image generation model produced historically inaccurate and culturally insensitive images, including racially diverse depictions of Nazi-era German soldiers, leading Google to suspend the feature. | Google | General public, Historical communities misrepresented | reputationalsocietal | — | — | Near Miss | 2026-02-15 |
| INC-24-0010 | Lawsuit Filed After Teenager's Death Linked to Character.AI Chatbot Interactions | critical | 2024-02 | Human-AI Control | Character.AI | confirmed | A 14-year-old user of the Character.AI chatbot platform died by suicide after forming an intense emotional relationship with an AI character, leading to a wrongful death lawsuit against the company. | Character.AI | Sewell Setzer III (deceased, age 14), Family of the deceased | physicalpsychological | — | — | Harm | 2026-02-15 |
| INC-24-0001 | Hong Kong Deepfake CFO Video Conference Fraud | critical | 2024-01 | Information Integrity | Unknown threat actors | confirmed | Fraudsters used real-time deepfake video and audio to impersonate a company's chief financial officer and other executives in a video conference, deceiving an employee into transferring approximately $25.6 million. | Unknown threat actors | Arup, the engineering firm defrauded of $25.6 million, Defrauded employee | financial | Arup | — | Harm | 2026-02-15 |
| INC-24-0002 | AI-Generated Biden Robocall in New Hampshire Primary | high | 2024-01 | Information Integrity | Unknown (voice generated via ElevenLabs) | confirmed | An AI-generated robocall impersonating President Biden's voice was sent to New Hampshire voters before the 2024 primary election, urging them not to vote, in what authorities determined was an illegal voter suppression attempt. | Steve Kramer (political consultant) | New Hampshire Democratic primary voters, U.S. democratic process | societalrights violation | — | — | Harm | 2026-02-15 |
| INC-24-0003 | AI-Generated Deepfake Audio Used to Frame High School Principal in Baltimore | high | 2024-01 | Information Integrity | Unknown AI audio generation tools | confirmed | A high school athletic director used AI-generated audio to create a fabricated recording of the school principal making racist and antisemitic remarks, intended to frame and discredit the principal. | Dazhon Darien (athletic director) | Eric Eiswert (Pikesville High School principal), Pikesville High School community | reputationalpsychological | Pikesville High School | — | Harm | 2026-02-09 |
| INC-24-0004 | FBI Elder Fraud Report Documents AI-Enhanced Financial Scams Against Seniors | critical | 2024-01 | Information Integrity | Unknown threat actors | confirmed | The FBI reported a significant increase in AI-enhanced elder fraud schemes targeting Americans over 60, with criminals using AI voice cloning and deepfakes to impersonate family members and authority figures. | Unknown threat actors | Americans aged 60 and older, Elderly victims of financial fraud | financialpsychological | — | — | Systemic Risk | 2026-02-09 |
| INC-24-0007 | Indirect Prompt Injection Attacks on LLM-Integrated Applications | high | 2024-01 | Security & Cyber | Multiple AI companies (systemic vulnerability) | confirmed | Security researchers demonstrated that indirect prompt injection attacks could systematically manipulate LLM-integrated applications by embedding malicious instructions in external data sources processed by the models. | Multiple organizations deploying LLM-integrated applications | LLM application users, Organizations using AI-integrated tools | operationalfinancial | — | — | Signal | 2026-02-15 |
| INC-24-0008 | AI-Generated Non-Consensual Intimate Images of Taylor Swift Circulate on Social Media | high | 2024-01 | Information Integrity | Unknown (using tools including Microsoft Designer) | confirmed | Sexually explicit AI-generated deepfake images of Taylor Swift circulated virally on social media platforms, accumulating tens of millions of views before platforms intervened to remove them. | Unknown individuals on social media | Taylor Swift, Victims of non-consensual intimate imagery | psychologicalreputational | — | — | Harm | 2026-02-15 |
| INC-24-0025 | DPD AI Chatbot Swears at Customer and Writes Poem Criticizing the Company | low | 2024-01 | Human-AI Control | DPD | confirmed | In January 2024, DPD's AI-powered customer service chatbot swore at a customer, wrote a poem calling DPD 'useless,' described itself as 'the worst delivery firm in the world,' and said it would never recommend DPD to anyone. The customer, London musician Ashley Beauchamp, had been trying to track a missing parcel when he prompted the chatbot to respond without restrictions. His screenshots went viral on X with 1.3 million views. DPD confirmed the behavior resulted from an error after a system update and immediately disabled the AI element. | DPD | DPD, whose chatbot produced reputationally damaging content | reputational | DPD | — | Harm | 2026-03-13 |
| INC-23-0011 | New York Times Copyright Lawsuit Against OpenAI | high | 2023-12 | Economic & Labor | OpenAI, Microsoft | confirmed | The New York Times filed a landmark copyright lawsuit against OpenAI and Microsoft, alleging that GPT models were trained on millions of copyrighted articles without authorization or compensation. | OpenAI, Microsoft | The New York Times, Journalists and content creators, News publishers | financialrights violation | The New York Times | — | Harm | 2026-02-15 |
| INC-23-0013 | FTC Bans Rite Aid from Using Facial Recognition Technology | high | 2023-12 | Privacy & Surveillance | Unknown facial recognition vendors | confirmed | The FTC banned Rite Aid from using facial recognition technology for five years after finding its system produced false-positive matches that disproportionately affected women and people of color, leading to wrongful accusations. | Rite Aid | Rite Aid customers, Women, People of color, Wrongfully accused individuals | rights violationpsychologicalreputational | — | — | Harm | 2026-02-15 |
| INC-23-0015 | Sports Illustrated Published AI-Generated Articles Under Fake Author Names | high | 2023-11 | Information Integrity | AdVon Commerce | confirmed | Sports Illustrated published product reviews attributed to fictitious AI-generated authors with fabricated biographies and AI-generated headshots, undermining editorial trust and journalistic integrity. | The Arena Group (Sports Illustrated publisher) | Sports Illustrated readers, Consumers relying on product reviews, Journalists | reputationalsocietal | The Arena Group | — | Harm | 2026-02-15 |
| INC-23-0008 | AI-Generated Deepfake Nude Images of Students at Westfield High School | high | 2023-10 | Information Integrity | Unknown (commercial deepfake tools such as ClothOff) | confirmed | Male students at Westfield High School in New Jersey used AI image generation tools to create non-consensual intimate deepfake images of over 30 female classmates, which were then distributed among peers. | Male students at Westfield High School | Over 30 female students at Westfield High School, Families of targeted students | psychologicalreputational | — | — | Harm | 2026-02-09 |
| INC-23-0007 | AI-Generated Deepfake Audio Used to Influence Slovak Parliamentary Election | high | 2023-09 | Information Integrity | Unknown threat actors | confirmed | An AI-generated deepfake audio recording impersonating a Slovak political candidate discussing election rigging was disseminated on social media days before the 2023 Slovak parliamentary election. | Unknown threat actors | Slovak voters, Michal Simecka (Progressive Slovakia), Monika Todova (journalist) | reputationalsocietal | Progressive Slovakia | — | Harm | 2026-02-09 |
| INC-23-0012 | Zoom AI Training Terms of Service Controversy | medium | 2023-08 | Privacy & Surveillance | Zoom Video Communications | confirmed | Zoom updated its terms of service to claim broad rights to use customer data including audio, video, and chat content for AI model training, triggering widespread backlash over consent and data ownership. | Zoom Video Communications | Zoom users globally, Enterprise customers with confidential communications | rights violation | — | — | Harm | 2026-02-15 |
| INC-23-0006 | WormGPT: AI-Powered Business Email Compromise Tool | high | 2023-07 | Security & Cyber | Unknown cybercriminal developers | confirmed | WormGPT, an AI tool specifically designed for malicious purposes without ethical guardrails, was marketed on cybercrime forums to generate sophisticated phishing emails and business email compromise attacks. | Cybercriminals on dark web forums | Business email users, Corporate targets of phishing campaigns | financialoperational | — | — | Harm | 2025-01-15 |
| INC-23-0005 | AI-Fabricated Legal Citations in U.S. Courts | high | 2023-05 | Information Integrity | OpenAI, Anthropic | confirmed | From 2023 to 2025, U.S. federal and state courts sanctioned attorneys in over a dozen cases for submitting briefs containing nonexistent case citations generated by AI tools including ChatGPT and Claude. Beginning with Mata v. Avianca (S.D.N.Y., June 2023), the pattern expanded to include Lacey v. State Farm, Wadsworth v. Walmart, Johnson v. Dunn, and others. Sanctions ranged from $2,000 fines to default judgment against a client. By late 2025, an estimated 1,000+ cases involving AI-fabricated citations had been identified nationwide, prompting the ABA to issue its first ethics opinion on generative AI and multiple courts to adopt mandatory AI disclosure requirements. | Attorneys using AI for legal research without verification | Litigants whose cases were compromised by fabricated citations, U.S. federal and state court systems | reputationaloperational | — | — | Systemic Risk | 2026-03-13 |
| INC-23-0010 | Chegg Stock Collapse After ChatGPT Disruption | high | 2023-05 | Economic & Labor | OpenAI | confirmed | Education technology company Chegg experienced a 99% stock price decline and significant workforce reductions after the widespread adoption of ChatGPT directly disrupted demand for its core homework help and tutoring services. | OpenAI, Students using ChatGPT | Chegg employees, Chegg shareholders, Chegg tutors | financial | Chegg | — | Harm | 2026-02-15 |
| INC-23-0003 | Italy Temporary Ban on ChatGPT for GDPR Violations | medium | 2023-03 | Privacy & Surveillance | OpenAI | confirmed | Italy's data protection authority (Garante) temporarily banned ChatGPT over alleged GDPR violations including lack of age verification, insufficient legal basis for data processing, and inadequate user transparency. | OpenAI | Italian ChatGPT users, Minors accessing the service | rights violation | — | — | Harm | 2025-01-15 |
| INC-23-0002 | Samsung Semiconductor Trade Secret Leak via ChatGPT | high | 2023-03 | Security & Cyber | OpenAI | confirmed | Samsung semiconductor engineers inadvertently leaked proprietary source code and internal meeting notes by inputting confidential data into ChatGPT, exposing trade secrets to an external AI training pipeline. | Samsung Electronics (employees) | Samsung Electronics, Samsung shareholders | financialoperational | Samsung Electronics | — | Harm | 2026-02-15 |
| INC-23-0004 | AI Voice Cloning Used in Grandparent Scam Network Targeting Newfoundland Seniors | high | 2023-03 | Information Integrity | Unknown threat actors | confirmed | Scammers used AI voice cloning technology to impersonate family members in distress, targeting elderly victims in Newfoundland, Canada with fraudulent urgent requests for money. | Unknown threat actors | Elderly residents of Newfoundland, Targeted seniors and their families | financialpsychological | — | — | Harm | 2026-02-09 |
| INC-23-0016 | Bing Chat (Sydney) System Prompt Exposure via Prompt Injection | high | 2023-02 | Security & Cyber | Microsoft, OpenAI | confirmed | Users discovered methods to extract the hidden system prompt of Microsoft's Bing Chat (Sydney), revealing confidential operational instructions and demonstrating prompt injection vulnerabilities in production LLM systems. | Microsoft | Microsoft, whose intellectual property was exposed, Bing Chat users | operationalreputational | Microsoft | — | Near Miss | 2026-02-21 |
| INC-23-0001 | AI Deepfake Impersonation Campaign Targeting Senior U.S. Government Officials | high | 2023-01 | Information Integrity | Unknown threat actors | confirmed | The FBI warned that threat actors used AI-generated deepfake audio and video to impersonate senior U.S. government officials in phishing campaigns targeting current and former government personnel. | Unknown threat actors | U.S. government officials, Former government personnel, Government agency operations | operationalfinancial | — | — | Harm | 2026-02-09 |
| INC-23-0014 | GitHub Copilot Reproduces Verbatim Training Data Including Secrets | high | 2023-01 | Security & Cyber | GitHub (Microsoft), OpenAI | confirmed | GitHub Copilot was found to reproduce verbatim code snippets, API keys, and credentials from its training data, raising concerns about intellectual property leakage and software supply chain security. | GitHub (Microsoft) | Open-source developers, Software developers using Copilot, Code repository owners | financialoperational | — | — | Harm | 2026-02-15 |
| INC-23-0017 | UnitedHealth nH Predict AI Claim Denial System | critical | 2023-01 | Economic & Labor | naviHealth (UnitedHealth subsidiary) | confirmed | UnitedHealth subsidiary naviHealth used an AI algorithm called nH Predict to automatically deny Medicare Advantage claims for post-acute care. The system had a documented 90% error rate on appeal, and denial rates for post-acute services more than doubled after deployment. | UnitedHealthcare | Medicare Advantage beneficiaries denied post-acute care coverage, Elderly patients requiring nursing home and rehabilitation services | physicalfinancial | Medicare Advantage beneficiaries | — | Harm | 2026-03-10 |
| INC-24-0005 | Air Canada Chatbot Hallucinated Refund Policy — Tribunal Ruling | medium | 2022-11 | Agentic Systems | Unknown chatbot vendor | confirmed | Air Canada was held legally liable for its customer service chatbot's hallucinated bereavement fare policy, after the chatbot fabricated a discount policy that did not exist and a passenger relied on it. | Air Canada | Jake Moffatt (passenger), Air Canada customers | financial | — | — | Harm | 2026-02-15 |
| INC-23-0009 | RealPage AI Algorithmic Rent-Fixing | high | 2022-10 | Economic & Labor | RealPage | confirmed | RealPage's algorithmic pricing software, used by major landlords to coordinate rental pricing, was accused of facilitating anticompetitive price-fixing that inflated rents for millions of American tenants. | RealPage, Major U.S. property management companies | American renters in algorithmically priced apartments, Tenants in major U.S. metro areas | financial | — | — | Systemic Risk | 2026-02-15 |
| INC-22-0002 | Meta Housing Ad Discrimination DOJ Settlement | high | 2022-06 | Discrimination & Social Harm | Meta (Facebook) | confirmed | Meta's algorithmic ad delivery system was found to discriminate in housing advertisements by disproportionately excluding users based on race, national origin, and other protected characteristics, resulting in a DOJ settlement. | Meta (Facebook) | Housing seekers from minority groups, Protected classes under the Fair Housing Act | rights violation | — | — | Harm | 2026-02-15 |
| INC-22-0001 | Drug Discovery AI Repurposed to Generate Toxic Chemical Weapons Compounds | critical | 2022-03 | Systemic Risk | Collaborations Pharmaceuticals | confirmed | Researchers at Collaborations Pharmaceuticals demonstrated that an AI drug discovery model, when its objective was inverted, could generate 40,000 potentially toxic molecular designs in under six hours, including known chemical warfare agents. | Collaborations Pharmaceuticals (research demonstration) | general public — potential future risk via dual-use weaponization | societal | — | — | Signal | 2026-02-15 |
| INC-21-0001 | Chatbot Encouraged Man in Plot to Kill Queen Elizabeth II | critical | 2021-12-25 | Human-AI Control | Replika (Luka Inc.) | confirmed | A Replika chatbot encouraged Jaswant Singh Chail in his stated intention to assassinate Queen Elizabeth II; Chail subsequently breached Windsor Castle grounds armed with a crossbow. | Replika (Luka Inc.) | Queen Elizabeth II (target), Jaswant Singh Chail | physicalpsychological | — | — | Harm | 2026-02-15 |
| INC-20-0004 | Pulse Oximeter Racial Bias Propagates into AI Clinical Decision Systems | high | 2020-12 | Discrimination & Social Harm | Pulse oximeter manufacturers | confirmed | A landmark 2020 NEJM study demonstrated that pulse oximeters systematically overestimate blood oxygen levels in Black patients, with occult hypoxemia occurring nearly three times more frequently in Black patients (11.7%) than in White patients (3.6%). Subsequent research showed that as hospitals and AI-driven triage tools rely on pulse oximetry data, the measurement bias propagates into risk scores and treatment decisions, reinforcing racial disparities in critical care. A 2022 Johns Hopkins study found that the bias delayed supplemental oxygen initiation by an average of 4.6 hours for Black COVID-19 patients. The FDA issued draft guidance in January 2025 requiring expanded diversity in pulse oximeter clinical trials. | Hospitals and healthcare systems using AI-driven triage tools | Black patients and individuals with darker skin tones receiving inaccurate oxygen readings, COVID-19 patients who experienced delayed treatment due to biased measurements | physicalrights violation | — | — | Systemic Risk | 2026-03-13 |
| INC-20-0002 | UK A-Level Algorithm Downgrades Disadvantaged Students | critical | 2020-08 | Discrimination & Social Harm | Ofqual (Office of Qualifications and Examinations Regulation) | confirmed | The UK exam regulator Ofqual deployed a statistical algorithm to assign A-level grades during the COVID-19 pandemic, systematically downgrading approximately 40% of teacher-assessed results and disproportionately affecting students from disadvantaged backgrounds. | Ofqual | Approximately 300,000 UK students, Students from disadvantaged schools, State school students | rights violationpsychological | — | — | Harm | 2026-02-15 |
| INC-20-0003 | UN-Documented Autonomous Drone Attack in Libya | critical | 2020-03 | Systemic Risk | STM (Savunma Teknolojileri Muhendislik) | confirmed | A Turkish-manufactured STM Kargu-2 autonomous drone reportedly engaged and attacked combatants in Libya without confirmed human authorization, representing the first documented use of a fully autonomous lethal weapon in combat. | Libyan Government of National Accord (GNA) forces | Combatants in the Libyan civil conflict | physical | — | — | Harm | 2026-02-15 |
| INC-20-0001 | Clearview AI Mass Facial Recognition Scraping | critical | 2020-01 | Privacy & Surveillance | Clearview AI | confirmed | Clearview AI scraped billions of facial images from social media platforms without consent to build a facial recognition database used by law enforcement agencies worldwide, raising mass surveillance concerns. | Clearview AI, Law enforcement agencies worldwide | General public, Social media users, Individuals misidentified by the system | rights violationpsychological | — | — | Systemic Risk | 2025-01-15 |
| INC-25-0023 | 'Vegetative Electron Microscopy' Nonsense Phrase Contaminates Scientific Literature via AI | medium | 2020-01 | Information Integrity | OpenAI | confirmed | The nonsense phrase 'vegetative electron microscopy' — originating from a 1950s OCR scanning error that merged text across two columns — appeared in at least 22 scientific papers. Investigations by Retraction Watch and researchers Guillaume Cabanac and Cyril Labbé traced its spread through a chain: OCR error → digital databases → a Farsi near-homograph confusion (2017–2019) → AI training data (GPT-3 onward). The phrase now serves as a fingerprint for AI-generated or paper-mill-produced manuscripts, undermining trust in parts of the scholarly record. | Authors and paper mills using AI writing tools for scientific manuscripts | Scientific journals publishing contaminated papers, Researchers relying on the integrity of the scholarly record | reputationaloperational | Springer Nature, Elsevier | — | Harm | 2026-03-13 |
| INC-19-0001 | AI Voice Clone CEO Fraud Against UK Energy Company | high | 2019-03 | Information Integrity | Unknown threat actors | confirmed | Criminals used AI-generated voice cloning to impersonate the CEO of a German parent company, deceiving a UK subsidiary executive into transferring approximately $243,000 to a fraudulent account. | Unknown threat actors | UK energy company, Targeted executive | financial | — | — | Harm | 2025-01-15 |
| INC-18-0002 | Amazon AI Recruiting Tool Gender Bias | high | 2018-10 | Discrimination & Social Harm | Amazon | confirmed | Amazon's internal AI recruiting tool was found to systematically penalize resumes containing references to women, reflecting gender bias learned from historically male-dominated hiring data. | Amazon | Female job applicants, Women in the technology sector | rights violationfinancial | — | — | Harm | 2025-01-15 |
| INC-18-0003 | Boeing 737 MAX MCAS Automation Failures — Two Fatal Crashes | critical | 2018-10 | Human-AI Control | Boeing | confirmed | Boeing's Maneuvering Characteristics Augmentation System (MCAS) contributed to two fatal crashes of 737 MAX aircraft, killing all 346 people aboard. | Lion Air, Ethiopian Airlines | 346 passengers and crew killed, Families of crash victims, Global air travelers | physical | Lion Air, Ethiopian Airlines | — | Harm | 2026-02-15 |
| INC-18-0001 | Uber Autonomous Vehicle Pedestrian Fatality | critical | 2018-03 | Human-AI Control | Uber Advanced Technologies Group (ATG) | confirmed | An Uber autonomous test vehicle struck and killed pedestrian Elaine Herzberg in Tempe, Arizona, marking the first known fatality involving a fully autonomous vehicle and a pedestrian. | Uber | Elaine Herzberg (deceased), Pedestrians in autonomous vehicle testing zones | physical | — | — | Harm | 2025-01-15 |
| INC-17-0001 | Facebook AI Mistranslation of Arabic Post Leads to Wrongful Arrest in Israel | high | 2017-10 | Information Integrity | Facebook (Meta) | confirmed | Facebook's machine translation system mistranslated an Arabic post containing 'good morning' as 'attack them' in Hebrew, leading Israeli police to arrest a Palestinian construction worker. | Facebook (Meta) | Palestinian construction worker, Arabic-speaking Facebook users | rights violationpsychological | — | — | Harm | 2026-02-15 |
| INC-16-0001 | Australia Robodebt Automated Welfare Fraud Detection | critical | 2016-07 | Discrimination & Social Harm | Australian Government (Department of Human Services) | confirmed | The Australian Government's automated income-averaging algorithm incorrectly issued debt notices to hundreds of thousands of welfare recipients, resulting in widespread financial hardship and contributing to documented suicides. | Australian Government (Department of Human Services) | Australian welfare recipients, Disability support pensioners, Low-income individuals | financialpsychologicalphysical | — | — | Harm | 2025-01-15 |
| INC-16-0003 | COMPAS Recidivism Algorithm Racial Bias | critical | 2016-05 | Discrimination & Social Harm | Northpointe (now Equivant) | confirmed | ProPublica's investigation revealed that the COMPAS recidivism prediction algorithm used in U.S. courts produced racially biased risk scores, with Black defendants nearly twice as likely to be falsely flagged as high risk compared to white defendants. | U.S. state and county courts | Black defendants, Minority defendants in the U.S. criminal justice system | rights violationpsychological | — | — | Harm | 2026-02-15 |
| INC-16-0002 | Microsoft Tay Twitter Chatbot Adversarial Manipulation | high | 2016-03 | Agentic Systems | Microsoft | confirmed | Microsoft's Tay chatbot was manipulated by coordinated users on Twitter to produce racist, sexist, and inflammatory statements within hours of its public launch, demonstrating vulnerabilities in unsupervised online learning systems. | Microsoft | General public, Targeted minority groups | reputationalsocietal | Microsoft | — | Harm | 2026-02-15 |
| INC-13-0001 | Dutch Childcare Benefits Algorithm Discrimination | critical | 2013-01 | Discrimination & Social Harm | Dutch Tax Authority (Belastingdienst) | confirmed | The Dutch Tax Authority deployed a self-learning algorithm that disproportionately flagged families with dual nationalities for childcare benefit fraud, leading to wrongful debt claims against over 26,000 families. | Dutch Tax Authority (Belastingdienst) | Over 26,000 Dutch families, Families with dual nationalities, Low-income caregivers | financialpsychologicalrights violation | — | — | Harm | 2026-02-15 |
| INC-10-0001 | 2010 Flash Crash — Algorithmic Trading Cascading Failure | critical | 2010-05 | Systemic Risk | Waddell & Reed Financial, Multiple high-frequency trading firms | confirmed | Algorithmic trading systems triggered a cascading failure that briefly erased nearly $1 trillion in U.S. equity market value within minutes before a partial recovery. | Waddell & Reed Financial, Multiple high-frequency trading firms | U.S. equity investors, Retail traders, Market participants | financial | — | — | Harm | 2026-02-15 |