TOP AI THREATS
Harm Mechanism

Misinformation

False or inaccurate information spread without deliberate intent to deceive, as distinct from disinformation, which involves intentional deception. AI-generated hallucinations are a major and growing source.

Definition

Misinformation refers to false, inaccurate, or misleading information that is created or disseminated without the deliberate intent to deceive. This distinguishes it from disinformation, which involves the purposeful creation and spread of falsehoods. In the context of AI, misinformation acquires new dimensions because large language models and generative AI systems can produce plausible but factually incorrect content — known as hallucinations — at unprecedented scale and speed. Users who trust AI-generated outputs may unknowingly propagate false claims, creating cascading misinformation effects. The authoritative presentation style of AI outputs compounds the problem, as recipients may assign unwarranted credibility to machine-generated text that lacks factual grounding.

How It Relates to AI Threats

Misinformation is a foundational threat within the Information Integrity domain, particularly under the misinformation and hallucinated content sub-category. AI systems contribute to misinformation through two primary pathways: direct generation of false content through hallucination, and amplification of existing false content through recommendation algorithms and content curation systems. When AI-generated misinformation reaches sufficient volume and velocity, it contributes to consensus reality erosion — the degradation of shared factual foundations necessary for democratic discourse and informed decision-making. The difficulty of distinguishing AI-generated misinformation from reliable information further undermines public trust in information ecosystems.

Why It Occurs

  • Large language models generate statistically plausible text without verifying the factual accuracy of their outputs
  • AI systems lack grounded understanding of truth and cannot distinguish fact from fabrication
  • Recommendation algorithms optimise for engagement, which can preferentially amplify sensational false claims
  • The volume and speed of AI content generation outpace human capacity for fact-checking and verification
  • Users frequently trust AI outputs without independent verification, propagating errors through social networks
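The amplification pathway described above can be illustrated with a minimal sketch. This is not any real platform's algorithm; the scoring weights, item fields, and `engagement_score` function are invented for illustration. The point is structural: when ranking optimises purely for predicted engagement, factual accuracy never enters the objective, so a sensational false item can outrank a sober correction.

```python
# Toy engagement-optimised ranking (illustrative only, not a real
# platform's algorithm). Items are scored on engagement signals alone;
# the "accurate" field is never consulted.

def engagement_score(item: dict) -> float:
    # Weighted mix of engagement signals; weights are invented
    # for illustration.
    return (1.0 * item["clicks"]
            + 2.0 * item["shares"]
            + 0.5 * item["comments"])

def rank_feed(items: list[dict]) -> list[dict]:
    # Highest predicted engagement first; accuracy is ignored entirely.
    return sorted(items, key=engagement_score, reverse=True)

feed = [
    {"title": "Sober correction of viral claim", "accurate": True,
     "clicks": 120, "shares": 10, "comments": 15},
    {"title": "Shocking claim (false)", "accurate": False,
     "clicks": 900, "shares": 400, "comments": 250},
]

ranked = rank_feed(feed)
# The false but sensational item ranks first, because engagement,
# not accuracy, drives the score.
```

Countermeasures such as down-ranking disputed content work by adding an accuracy-sensitive term to exactly this kind of objective.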

Real-World Context

AI-generated misinformation is documented in incidents including INC-23-0005, where AI-produced false content entered public discourse. Research has shown that large language models hallucinate at measurable rates across all major benchmarks, with error rates varying by domain and query complexity. The EU Digital Services Act requires platforms to address systemic risks from misinformation, while the EU AI Act imposes transparency obligations on AI-generated content. Fact-checking organisations and media literacy initiatives have identified AI-generated misinformation as a priority challenge, prompting development of detection tools and provenance standards such as the C2PA content authenticity framework.
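Provenance standards such as C2PA bind a cryptographically signed manifest to a piece of content so that tampering is detectable. The sketch below is a drastically simplified stand-in for that idea, showing only the hash-binding step; real C2PA manifests additionally carry certificate-backed signatures and assertion chains, and the function names here are hypothetical, not part of any C2PA library.

```python
import hashlib

def make_manifest(content: bytes, claimed_source: str) -> dict:
    # Bind the content to a claimed source via a SHA-256 digest.
    # (Real provenance standards also sign this record.)
    return {"source": claimed_source,
            "sha256": hashlib.sha256(content).hexdigest()}

def verify_manifest(content: bytes, manifest: dict) -> bool:
    # Recompute the digest; any edit to the content breaks the match.
    return hashlib.sha256(content).hexdigest() == manifest["sha256"]

original = b"Genuine article text."
manifest = make_manifest(original, "example-newsroom")

verify_manifest(original, manifest)        # untampered content checks out
verify_manifest(b"Edited text.", manifest) # any modification is detected
```

The hash binding alone only proves the content is unchanged since the manifest was made; establishing *who* made the claim requires the signature layer that real provenance frameworks add on top.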

Last updated: 2026-02-14