Governance Concept

Human Agency

The capacity of individuals to make autonomous, informed decisions and exercise meaningful control over actions that affect their lives, a capacity increasingly at risk as AI systems assume decision-making authority.

Definition

Human agency refers to the capacity of individuals to act intentionally, make informed choices, and exercise meaningful control over decisions that affect their circumstances. In the context of AI governance, human agency encompasses both the ability to understand when and how automated systems influence outcomes and the practical power to override, contest, or opt out of such systems. The concept draws from philosophical traditions concerning autonomy and self-determination, but acquires specific technical and legal dimensions when AI systems mediate or replace human judgment. Preserving human agency requires not only that humans remain nominally “in the loop” but that they possess sufficient understanding, authority, and practical means to exercise genuine oversight.

How It Relates to AI Threats

Human agency is a foundational concern within the Human-AI Control and Agentic & Autonomous AI domains. Loss of human agency occurs when AI systems gradually assume decision-making functions without explicit delegation, a process termed implicit authority transfer. This is compounded by automation bias, where humans defer to AI recommendations even when their own judgment would produce better outcomes. In agentic AI contexts, autonomous systems may act on behalf of users without meaningful human review, further eroding the boundary between human decision and machine action. The degradation of human agency is often incremental and difficult to detect, making it a particularly insidious threat pattern.
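The oversight failure described above has a structural counterpart in system design: whether an agentic system routes consequential actions through a human reviewer, or executes them silently. The sketch below is purely illustrative, not drawn from any incident or standard in this entry; the `HumanOversightGate` class, the `risk` score, and the 0.5 threshold are all hypothetical, standing in for whatever risk triage and review interface a real deployment would use.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    """An action an AI agent wants to take on a user's behalf."""
    description: str
    risk: float  # hypothetical 0.0-1.0 risk score assigned by some triage step

class HumanOversightGate:
    """Illustrative gate: actions above a risk threshold require explicit
    human approval before they run; everything else passes through.

    `approve` stands in for a real review interface (ticket, UI prompt, etc.)."""

    def __init__(self, risk_threshold: float,
                 approve: Callable[[ProposedAction], bool]):
        self.risk_threshold = risk_threshold
        self.approve = approve

    def execute(self, action: ProposedAction, run: Callable[[], str]) -> str:
        # High-risk actions are held for human review rather than auto-executed.
        if action.risk >= self.risk_threshold:
            if not self.approve(action):
                return "blocked: human reviewer declined"
        return run()

# A reviewer who declines everything, to show the gate actually blocks.
gate = HumanOversightGate(risk_threshold=0.5, approve=lambda a: False)
print(gate.execute(ProposedAction("issue refund", risk=0.9), lambda: "done"))
# → blocked: human reviewer declined
print(gate.execute(ProposedAction("log event", risk=0.1), lambda: "done"))
# → done
```

The point of the sketch is the failure mode it makes visible: if the threshold drifts upward, or `approve` is replaced with a rubber-stamp default, the code still runs and the human is still nominally "in the loop", yet no genuine oversight occurs, which is exactly the incremental agency loss the text describes.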

Why It Occurs

  • AI systems are designed to minimise friction, which often means reducing opportunities for human deliberation
  • Automation bias leads humans to accept AI outputs uncritically, particularly under time pressure
  • Complex AI interfaces obscure when and how automated decisions are being made on behalf of users
  • Institutional incentives favour efficiency gains from automation over preservation of human oversight
  • Gradual normalisation of AI-mediated decisions erodes awareness that agency is being transferred

Real-World Context

The erosion of human agency is documented in incidents such as INC-18-0001, where automated decision-making systems operated with insufficient human oversight. The EU AI Act explicitly enshrines human oversight requirements for high-risk AI systems, mandating that individuals retain the ability to understand, supervise, and override automated decisions. The OECD AI Principles similarly identify human agency as a core value. Professional bodies in medicine, law, and finance have begun developing guidance on maintaining practitioner autonomy in the presence of AI advisory tools.

Last updated: 2026-02-14