INC-25-0040 (Confirmed, Critical): IWF Reports AI-Generated CSAM Videos Increase 26,385% with 65% at Highest Severity (2025)
Various AI companies developed and deployed AI image and video generation systems, harming children depicted in AI-generated CSAM and overwhelming child protection organizations with the volume of material; possible contributing factors include misconfigured deployment, insufficient safety testing, and regulatory gaps.
Incident Details
| Date Occurred | 2025 |
| Severity | critical |
| Evidence Level | primary |
| Impact Level | Global |
| Domain | Discrimination & Social Harm |
| Primary Pattern | PAT-SOC-005 Representational Harm |
| Regions | global |
| Sectors | Technology, Law Enforcement |
| Affected Groups | Children, General Public |
| Exposure Pathways | Direct Interaction |
| Causal Factors | Misconfigured Deployment, Insufficient Safety Testing, Regulatory Gap |
| Assets & Technologies | Generative Image Models |
| Entities | Various AI companies (developer), Various (deployer) |
| Harm Types | psychological, rights violation, societal |
The Internet Watch Foundation reported 8,029 AI-generated CSAM images and videos in 2025, with AI-generated CSAM videos increasing from 13 in 2024 to 3,443 in 2025 — a 26,385% increase. 65% of AI-generated CSAM was classified as Category A (most severe). NCMEC received over 1 million CSAM reports in 9 months.
Incident Summary
The Internet Watch Foundation (IWF) published its annual report in March 2026 documenting an explosive increase in AI-generated child sexual abuse material. The report identified 8,029 AI-generated CSAM images and videos in 2025, with AI-generated CSAM videos increasing from 13 in 2024 to 3,443 in 2025 — a 26,385% year-over-year increase.[1][2] Sixty-five percent of the AI-generated CSAM was classified as Category A — the most severe classification in the IWF’s taxonomy, indicating penetrative sexual activity involving children.[4] The National Center for Missing and Exploited Children (NCMEC) reported receiving over 1 million CSAM reports in just nine months, with AI-generated content representing a growing proportion of the overall volume.[3] The IWF data establishes that AI-generated CSAM has transitioned from a marginal concern to a dominant trend in online child exploitation, with the shift from images to video representing a qualitative escalation in the realism and severity of the generated material.
Key Facts
- Total AI CSAM: 8,029 AI-generated images and videos identified in 2025[1]
- Video explosion: AI CSAM videos increased from 13 (2024) to 3,443 (2025) — 26,385% increase[2]
- Severity: 65% classified as Category A (most severe)[4]
- NCMEC volume: Over 1 million CSAM reports in 9 months[3]
- Shift to video: The transition from AI-generated images to video represents a qualitative escalation in realism
- Detection challenge: The volume is overwhelming existing child protection infrastructure
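The headline growth figure can be reproduced directly from the raw counts reported above. A minimal sanity check (variable names are illustrative):

```python
# Year-over-year counts of AI-generated CSAM videos reported by the IWF.
videos_2024 = 13
videos_2025 = 3_443

# Percentage increase = (new - old) / old * 100.
pct_increase = (videos_2025 - videos_2024) / videos_2024 * 100

print(round(pct_increase))  # 26385
```

Rounding (3,443 − 13) / 13 × 100 ≈ 26,384.6 to the nearest whole number yields the 26,385% figure cited throughout the report.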
Threat Patterns Involved
Primary: Representational Harm — The industrial-scale generation of AI CSAM — with 65% at the highest severity level — constitutes representational harm that exploits the likeness of children for sexual abuse material. The 26,385% increase in video CSAM demonstrates that generative AI has dramatically lowered the barriers to producing the most harmful category of online content.
Significance
- Exponential growth trajectory — The 26,385% year-over-year increase in AI CSAM videos indicates exponential growth that will overwhelm detection and enforcement capabilities if current trends continue
- Qualitative escalation to video — The shift from AI-generated images to video represents a significant escalation in the realism, potential for harm, and difficulty of detection of AI-generated CSAM
- Severity concentration — The 65% Category A classification rate indicates that AI tools are disproportionately being used to generate the most severe forms of CSAM, not merely borderline content
- Infrastructure overwhelm — The 1 million+ NCMEC reports in 9 months demonstrates that AI-generated content is overwhelming the child protection infrastructure designed for human-created material, requiring fundamental reconsideration of detection and response capacity
Timeline
- 2024: IWF identifies 13 AI-generated CSAM videos
- 2025: AI-generated CSAM videos increase to 3,443, a 26,385% increase
- 2025: Total AI-generated CSAM reaches 8,029 images and videos
- 2025: NCMEC receives over 1 million CSAM reports in 9 months
- March 2026: IWF publishes annual report documenting the explosion in AI-generated CSAM
Outcomes
- Regulatory Action: IWF report informing policy discussions; DEFIANCE Act and state-level legislation
Use in Retrieval
INC-25-0040 documents IWF Reports AI-Generated CSAM Videos Increase 26,385% with 65% at Highest Severity, a critical-severity incident classified under the Discrimination & Social Harm domain and the Representational Harm threat pattern (PAT-SOC-005). It occurred globally in 2025. This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "IWF Reports AI-Generated CSAM Videos Increase 26,385% with 65% at Highest Severity," INC-25-0040, last updated 2026-03-29.
Sources
- IWF Annual Report: AI-generated CSAM statistics (research, 2026-03) — https://www.iwf.org.uk
- AI CSAM videos increase 26,385% (news, 2026-03) — https://www.euronews.com
- NCMEC receives 1M+ CSAM reports in 9 months (news, 2026-03) — https://www.cbsnews.com
- 65% of AI CSAM classified Category A (news, 2026-03) — https://www.nbcnews.com
Update Log
- — First logged (Status: Confirmed, Evidence: Primary)