AI Threats Affecting Society at Large
How AI-enabled threats produce diffuse systemic harm to social cohesion, public trust, epistemic integrity, or institutional stability — extending beyond identifiable individuals or organizations.
How AI Threats Appear
For society at large, AI-enabled threats manifest as systemic, diffuse harms that cannot be attributed to specific individuals or organizations:
- Epistemic degradation — Widespread AI-generated content that erodes the shared capacity to distinguish fact from fabrication, undermining the epistemic foundations of public discourse
- Trust erosion — Cumulative loss of public confidence in institutions, media, and interpersonal communications due to the prevalence of AI-generated synthetic content
- Power concentration — Structural accumulation of economic and informational power by entities controlling advanced AI systems, reducing competitive diversity and democratic accountability
- Social fragmentation — AI-driven recommendation and content generation systems that create incompatible information environments, fragmenting shared reality
- Existential and catastrophic risk — Speculative but evidence-informed concerns about advanced AI systems whose optimization targets diverge from collective human welfare
Society at large is the highest-threshold affected-group value: use it only when harm genuinely diffuses across society and cannot be meaningfully captured by any other group value.
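This threshold rule reads naturally as a small decision procedure. The sketch below is a minimal illustration in Python, assuming a two-question assessment; the HarmAssessment fields and return strings are hypothetical, not the Methodology's actual schema.

```python
from dataclasses import dataclass

@dataclass
class HarmAssessment:
    # Can the harm be tied to named individuals or organizations?
    identifiable_victims: bool
    # Does the harm diffuse across society rather than one group?
    diffuse_across_society: bool

def affected_group(a: HarmAssessment) -> str:
    """Apply the threshold rule: prefer a more specific affected-group
    value; fall back to 'society at large' only for genuinely diffuse harm."""
    if a.identifiable_victims:
        return "use a more specific group value (see Methodology)"
    if a.diffuse_across_society:
        return "society at large"
    return "insufficient evidence to classify"

print(affected_group(HarmAssessment(False, True)))  # -> society at large
```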
Relevant AI Threat Domains
- Systemic Risk — Infrastructure dependency, strategic misalignment, and uncontrolled capability escalation
- Information Integrity — Large-scale erosion of public trust in information ecosystems
- Economic & Labor — Structural market concentration and economic power asymmetry
- Human-AI Control — Gradual transfer of societal decision-making to AI systems
What to Watch For
Indicators of AI-related societal-level harm:
- Measurable decline in public trust in information sources, institutions, or democratic processes correlated with AI content proliferation (a correlation screen is sketched after this list)
- Market concentration metrics showing AI-driven consolidation across multiple sectors (an HHI sketch follows this list)
- Evidence of AI systems influencing public opinion or behavior at population scale
- Increasing difficulty in attributing authorship or verifying authenticity of public communications
- Cascading effects where AI failures in one domain propagate to create systemic instability
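The market-concentration indicator above is typically screened with the Herfindahl-Hirschman Index (HHI). The sketch below is a minimal illustration, assuming revenue-based market shares; the sector figures are hypothetical, and the concentration bands follow the widely cited 2010 US merger-guidelines convention.

```python
def hhi(market_shares):
    """Herfindahl-Hirschman Index: sum of squared market shares,
    with shares expressed as percentages (0-100).
    Ranges from near 0 (perfect competition) to 10,000 (monopoly)."""
    total = sum(market_shares)
    # Normalize to percentages in case raw revenue figures are passed in.
    shares_pct = [100.0 * s / total for s in market_shares]
    return sum(s ** 2 for s in shares_pct)

# Hypothetical revenue figures for an AI-dependent sector (illustrative only).
sector_revenues = [420.0, 310.0, 150.0, 70.0, 50.0]

index = hhi(sector_revenues)
print(f"HHI = {index:,.0f}")
# 2010 US merger-guidelines bands: <1,500 unconcentrated,
# 1,500-2,500 moderately concentrated, >2,500 highly concentrated.
if index > 2500:
    print("Highly concentrated market")
```

Similarly, the trust-decline indicator is a correlation screen. A minimal sketch, assuming a hypothetical quarterly trust survey paired with estimates of synthetic-content prevalence:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical quarterly series (illustrative only): estimated share of
# AI-generated content in a media sample vs. a survey-based trust index.
synthetic_share = [0.05, 0.09, 0.14, 0.22, 0.31, 0.38]
trust_index     = [62.0, 60.5, 58.0, 54.5, 51.0, 49.0]

r = correlation(synthetic_share, trust_index)
print(f"Pearson r = {r:.2f}")  # strongly negative here by construction
# Correlation alone does not establish that AI content caused the decline;
# it is only the screening signal the indicator above describes.
```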
Regulatory Context
- EU AI Act — Establishes systemic risk assessment requirements for general-purpose AI models
- NIST AI RMF — Addresses societal-scale AI risks through organizational risk management
- International AI governance initiatives (UN, OECD, G7) address cross-border systemic AI risks
- AI safety research institutions (MIRI, ARC, CAIS) contribute to understanding existential and catastrophic AI risk trajectories
For classification rules and evidence standards, refer to the Methodology.
Last updated: 2026-03-03