
AI Threats Affecting the General Public

How AI-enabled threats affect the broad population of individuals as end users, consumers, or members of the public — cases where harm is not confined to a specific professional or demographic group.

How AI Threats Appear

For the general public, AI-enabled threats most commonly surface through:

  • Synthetic media and misinformation — AI-generated text, images, audio, or video that distorts public understanding, erodes trust, or enables fraud
  • Social engineering at scale — Personalized phishing, scam messages, or impersonation attacks powered by language models
  • Manipulative interfaces — AI-driven recommendation systems, chatbots, or digital assistants that shape behavior through engagement optimization or dark patterns
  • Privacy erosion — Behavioral profiling, facial recognition, and inference of sensitive attributes from everyday digital activity
  • Unreliable AI advice — Chatbots and AI assistants providing inaccurate medical, legal, or financial information that users act upon

These risks are documented through real-world incidents rather than hypothetical scenarios.

What to Watch For

Indicators of exposure to AI-enabled threats in everyday contexts:

  • Content that seems designed to provoke emotional reactions or urgency
  • Communications from contacts that seem inconsistent with their usual behavior
  • Services that require disproportionate personal data relative to their function
  • AI-generated recommendations that consistently push toward specific commercial outcomes
  • Difficulty distinguishing AI-generated content from human-created content

Regulatory Context

Several governance frameworks address protection of the general public from AI-enabled harm:

  • EU AI Act — Classifies high-risk AI systems that affect fundamental rights, with transparency requirements for AI-generated content
  • NIST AI RMF — Provides risk management guidance for AI systems interacting with the public
  • Consumer protection guidance — Agencies in multiple jurisdictions are developing AI-specific rules addressing deceptive practices

For classification rules and evidence standards, refer to the Methodology.

Last updated: 2026-03-03