Harm Mechanism

Non-Consensual Intimate Imagery

Sexually explicit images or videos created or distributed without the depicted person's consent, increasingly generated using AI deepfake tools.

Definition

Non-consensual intimate imagery (NCII) refers to sexually explicit images or videos that are created, altered, or distributed without the depicted individual’s consent. Historically associated with the distribution of real photographs, NCII has expanded to include AI-generated synthetic content where a person’s likeness is superimposed onto explicit material using deepfake tools. These AI-generated images can be produced from publicly available photographs, requiring no access to genuine intimate material. The term encompasses both the creation and distribution of such content, and applies regardless of whether the imagery depicts real or fabricated scenarios.

How It Relates to AI Threats

NCII is a primary harm mechanism within both Information Integrity and Discrimination & Social Harm. Within Information Integrity, it represents a form of synthetic media manipulation in which generative AI is used to produce fraudulent content depicting real individuals. Within Discrimination & Social Harm, NCII disproportionately targets women and girls, functioning as a tool of harassment, coercion, and reputational destruction. The accessibility of AI generation tools has dramatically lowered the technical barrier to producing this material.

Why It Occurs

  • Generative AI tools capable of producing realistic synthetic imagery have become widely accessible
  • Open-source models and consumer applications allow non-technical users to create explicit deepfakes
  • Existing content moderation systems struggle to detect and remove AI-generated material at scale, in part because hash-based matching is reactive (see the sketch after this list)
  • Legal frameworks in many jurisdictions have not been updated to address AI-generated NCII specifically
  • Social media platforms provide rapid distribution channels that outpace takedown mechanisms
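
One reason detection lags is that the dominant takedown mechanism is reactive hash matching: platforms fingerprint images that victims have already reported and block re-uploads of the same picture. The sketch below illustrates the idea with perceptual hashing. It is a minimal sketch, assuming a Python environment with the Pillow and imagehash libraries; the in-memory database, function names, and distance threshold are illustrative rather than any specific platform's implementation.

```python
import imagehash
from PIL import Image

# Illustrative in-memory store of perceptual hashes for images that
# have already been reported. Real hash-sharing schemes distribute
# fingerprints like these across participating platforms.
known_hashes: set[imagehash.ImageHash] = set()

def register_reported_image(path: str) -> None:
    """Fingerprint a reported image and add it to the match database."""
    known_hashes.add(imagehash.phash(Image.open(path)))

def matches_known_report(path: str, max_distance: int = 5) -> bool:
    """Check an uploaded image against previously reported imagery.

    Perceptual hashes survive resizing and re-encoding, so re-uploads
    of a known image are caught. A freshly generated deepfake, however,
    has no prior fingerprint and passes this check unseen.
    """
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in known_hashes)
```

The gap this illustrates is structural rather than an engineering oversight: matching can only block content someone has already reported, so wholly novel synthetic images circulate until a victim discovers and reports them.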

Real-World Context

The Westfield High School incident (INC-23-0008) demonstrated how AI-generated NCII can be used to target students: deepfake nude images were created from ordinary social media photographs and distributed among peers. The case prompted legislative action in multiple U.S. states to criminalise AI-generated NCII, and the UK Online Safety Act 2023 and the proposed U.S. DEFIANCE Act represent broader regulatory responses to this threat category. Researchers estimate that the volume of AI-generated NCII has grown substantially year over year, and that the majority of targets are women and girls with no prior involvement in explicit content.

Last updated: 2026-02-14