Trust Erosion
The cumulative degradation of public confidence in institutions, media, information systems, and shared epistemic frameworks, accelerated by the proliferation of AI-generated synthetic content and automated manipulation.
Definition
Trust erosion describes the gradual, cumulative decline in public confidence toward institutions, media organisations, information systems, and the shared frameworks through which societies establish factual consensus. In the context of artificial intelligence, trust erosion is accelerated by the increasing availability of synthetic media, AI-generated disinformation, and automated manipulation tools that make it progressively more difficult for individuals to distinguish authentic content from fabricated material. Unlike discrete disinformation events, trust erosion operates as a systemic process where repeated exposure to unreliable information degrades the baseline capacity of societies to achieve shared understanding, even when specific false claims are debunked.
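This ratcheting dynamic can be made concrete with a toy model. The sketch below is illustrative only, not drawn from the source: the decay and recovery rates are assumptions, chosen to show the shape of the process. Each exposure to an unreliable claim reduces a scalar trust level, and a successful debunk restores only part of the loss, so baseline trust declines even under perfect fact-checking.

```python
# Toy model of cumulative trust erosion. All parameters are
# illustrative assumptions, not empirical estimates.

def simulate_trust(cycles: int, decay: float = 0.05,
                   recovery_fraction: float = 0.5,
                   initial_trust: float = 1.0) -> float:
    """Trust remaining after `cycles` rounds of exposure followed by debunking."""
    trust = initial_trust
    for _ in range(cycles):
        before = trust
        trust *= 1.0 - decay  # exposure to an unreliable claim erodes trust
        trust += recovery_fraction * (before - trust)  # debunking repairs only part
    return trust

if __name__ == "__main__":
    for n in (10, 50, 200):
        print(f"after {n:>3} exposure/debunk cycles: trust = {simulate_trust(n):.3f}")
```

Even with every false claim corrected, the net multiplier per cycle stays below one, which is the sense in which the harm is cumulative rather than attributable to any single event.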
How It Relates to AI Threats
Trust erosion spans the Systemic-Catastrophic and Information-Integrity threat domains. Within the taxonomy, it connects to accumulative risk and trust erosion, where the long-term societal impact of AI-enabled manipulation compounds over time, and to consensus reality erosion, where the very concept of shared factual ground becomes contested. Trust erosion is particularly insidious because it does not require any single piece of AI-generated content to be believed. The mere awareness that convincing synthetic media exists creates a “liar’s dividend,” where authentic evidence can be dismissed as potentially fabricated, undermining accountability structures.
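The liar's dividend admits a simple Bayesian reading, sketched below with invented numbers (the priors and likelihoods are assumptions for illustration, not estimates from the source). As synthetic media quality rises, "looking convincing" stops discriminating genuine recordings from fakes, so authentic evidence loses its power to move beliefs.

```python
# Bayesian sketch of the liar's dividend. All probabilities are
# illustrative assumptions. An observer sees a convincing recording
# and asks: how likely is it to be genuine?

def posterior_genuine(prior_genuine: float,
                      p_convincing_if_genuine: float = 0.95,
                      p_convincing_if_fake: float = 0.90) -> float:
    """P(genuine | convincing) via Bayes' rule."""
    numerator = p_convincing_if_genuine * prior_genuine
    evidence = numerator + p_convincing_if_fake * (1.0 - prior_genuine)
    return numerator / evidence

if __name__ == "__main__":
    # As fakes become as convincing as real recordings, the likelihood
    # ratio approaches 1 and the posterior collapses toward the prior:
    # authentic evidence can be waved away as "possibly AI-generated".
    for fake_quality in (0.10, 0.50, 0.90):
        p = posterior_genuine(0.60, p_convincing_if_fake=fake_quality)
        print(f"P(convincing | fake) = {fake_quality:.2f} -> "
              f"P(genuine | convincing) = {p:.3f}")
```

Note that no fake needs to be believed, or even shown: raising the observer's prior that convincing fakes circulate is enough to devalue genuine evidence.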
Why It Occurs
- AI-generated synthetic media has reached quality levels that challenge human perceptual verification
- The volume of AI-produced content overwhelms institutional capacity for fact-checking and verification (see the backlog sketch after this list)
- Repeated exposure to unreliable information normalises scepticism toward all information sources
- The liar’s dividend allows bad actors to dismiss authentic evidence by claiming it is AI-generated
- Platform algorithms optimise for engagement, inadvertently amplifying sensational or misleading content
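The volume point above is ultimately arithmetic. The back-of-envelope sketch below uses hypothetical rates, chosen only for illustration: when suspect items arrive faster than verifiers can clear them, the unverified backlog grows without bound.

```python
# Back-of-envelope sketch of the verification asymmetry.
# Both rates are hypothetical, chosen only for illustration.

def unverified_backlog(days: int, arrivals_per_day: float,
                       checks_per_day: float) -> float:
    """Items still awaiting verification after `days`, assuming constant rates."""
    return max(0.0, (arrivals_per_day - checks_per_day) * days)

if __name__ == "__main__":
    # Hypothetical: 10,000 suspect items generated per day against a
    # fact-checking capacity of 200 items per day.
    for d in (1, 30, 365):
        print(f"day {d:>3}: backlog = {unverified_backlog(d, 10_000, 200):,.0f} items")
```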
Real-World Context
Incident INC-23-0007, the Slovakia election deepfake audio, demonstrates how AI-generated content can undermine trust in democratic processes at critical moments. The incident involved fabricated audio recordings released before an election, exploiting the difficulty of rapid verification. More broadly, surveys across multiple countries document declining trust in media and institutions, with awareness of deepfake technology cited as a contributing factor. The European Digital Media Observatory and similar organisations have expanded verification infrastructure in response, while legislators in multiple jurisdictions have introduced AI content labelling requirements.
Related Incidents
- INC-23-0007: Slovakia election deepfake audio
Related Threat Patterns
- Accumulative risk and trust erosion
- Consensus reality erosion
Related Terms
- Liar's dividend
- Synthetic media
- Disinformation
Last updated: 2026-02-14