TopAIThreats
Failure Mode

Overreliance

Excessive dependence on AI system outputs without adequate independent verification or critical evaluation, leading to unchecked errors and diminished human judgment capacity.

Definition

Overreliance describes a pattern in which human users or organizations place excessive trust in AI system outputs, accepting recommendations, classifications, or generated content without adequate independent verification. Overreliance is distinct from but closely related to automation bias: automation bias is the cognitive tendency to favor automated outputs over contradictory information, while overreliance is the broader behavioral pattern in which verification practices, critical evaluation, and independent judgment are systematically reduced or abandoned in favor of AI-generated answers. Overreliance can develop gradually: as users experience AI outputs that are mostly correct, they build a default assumption of reliability that persists even when the system operates outside its domain of competence or encounters novel situations.

How It Relates to AI Threats

Overreliance is a central failure mode within the Human-AI Control domain. Under the overreliance and automation bias sub-category, this pattern represents a fundamental challenge to meaningful human oversight of AI systems. When human operators default to accepting AI outputs without critical evaluation, the intended safeguard of human-in-the-loop design is effectively neutralized. The problem is compounded by AI systems that present outputs with uniform confidence regardless of their actual reliability, providing no signal to users about when additional scrutiny is warranted. In high-stakes domains including healthcare, criminal justice, and financial services, overreliance can lead to consequential errors that the human oversight layer was specifically designed to prevent.

Why It Occurs

  • AI systems that are correct most of the time condition users to expect correctness, making exceptions difficult to detect
  • Uniform presentation of outputs regardless of confidence level provides no signal indicating when human scrutiny is most needed
  • Time pressure and workload incentives reward rapid acceptance of AI recommendations over slower independent verification
  • Deskilling occurs as practitioners who rely on AI lose the domain expertise needed to identify AI errors
  • Organizational processes increasingly treat AI outputs as default decisions rather than inputs to human judgment
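One commonly proposed countermeasure to the second factor above is to surface a confidence signal with each output and force human review when it falls below a policy threshold, rather than presenting all outputs uniformly. A minimal sketch of such an escalation gate (the `ModelOutput` type, the threshold value, and the routing labels are illustrative assumptions, not part of any specific framework, and real deployments would also need calibrated confidence scores for the gate to be meaningful):

```python
from dataclasses import dataclass


@dataclass
class ModelOutput:
    """A hypothetical AI system output with an attached confidence score."""
    label: str
    confidence: float  # assumed to be a calibrated probability in [0, 1]


# Hypothetical policy value: outputs below this go to a human reviewer.
REVIEW_THRESHOLD = 0.90


def route(output: ModelOutput) -> str:
    """Route an output to auto-acceptance or mandatory human review.

    Escalating low-confidence outputs restores a signal telling users
    when additional scrutiny is warranted, instead of presenting every
    output with the same apparent authority.
    """
    if output.confidence >= REVIEW_THRESHOLD:
        return "auto-accept"
    return "human-review"


if __name__ == "__main__":
    # A low-confidence classification is escalated rather than silently
    # presented alongside high-confidence ones.
    print(route(ModelOutput("fraud", 0.97)))  # auto-accept
    print(route(ModelOutput("fraud", 0.62)))  # human-review
```

A gate like this does not eliminate overreliance on its own (reviewers can still rubber-stamp escalated items), but it removes the uniform-presentation problem that makes unchecked acceptance the path of least resistance.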

Real-World Context

While no specific incidents in the TopAIThreats taxonomy are currently classified under overreliance alone, the pattern is implicated in multiple documented cases. INC-24-0001, the Hong Kong deepfake CFO fraud, illustrates how the realistic quality of AI-generated content can override normal verification procedures. More broadly, documented cases of lawyers submitting AI-hallucinated legal citations to courts, healthcare providers following incorrect AI diagnostic recommendations, and financial analysts acting on AI-generated market summaries without verification all demonstrate the real-world consequences of overreliance. Regulatory frameworks including the EU AI Act require human oversight provisions specifically to counter overreliance in high-risk AI applications.

Last updated: 2026-02-14