TOP AI THREATS

AI Threats Affecting Children

How AI-enabled threats affect minors under 18, a group warranting distinct protection because of their developmental vulnerability, dedicated legal safeguards, and structural inability to provide informed consent.


How AI Threats Appear

For children, AI-enabled threats most commonly surface through:

  • Non-consensual synthetic imagery — AI-generated explicit or exploitative images of minors, including deepfakes created from publicly available photos
  • Manipulative AI interactions — Chatbots, virtual companions, or recommendation algorithms that exploit developmental vulnerabilities through emotional manipulation or addictive design
  • Data collection without meaningful consent — AI systems that profile minors through educational platforms, gaming, or social media, often without parental knowledge
  • Algorithmic content exposure — Recommendation systems that surface age-inappropriate, harmful, or radicalizing content to young users
  • Discriminatory educational AI — Automated grading, behavioral monitoring, or learning assessment systems that disadvantage students based on demographic characteristics

Children are treated as a distinct affected group due to their legal protections, developmental vulnerability, and structural inability to provide informed consent to AI system interactions.



What to Watch For

Indicators of AI-related harm to children:

  • AI systems interacting with minors without age-appropriate safeguards or parental controls
  • Educational platforms using AI assessment without transparency about criteria or data use
  • Social media or gaming platforms with AI recommendation systems that lack youth-specific protections
  • AI-generated content targeting children that mimics trusted sources or authority figures
  • Collection of biometric or behavioral data from children through school-provided devices

Regulatory Context

  • EU AI Act — Specifically identifies AI systems interacting with children as requiring heightened risk assessment
  • COPPA (US) — Restricts collection of personal information from children under 13, with AI systems subject to these requirements
  • UK Age Appropriate Design Code — Requires AI-powered services likely to be accessed by children to meet specific design standards
  • Multiple jurisdictions are developing AI-specific protections for minors, particularly in educational and social media contexts

For classification rules and evidence standards, refer to the Methodology.

Last updated: 2026-03-03