Harm Mechanism

Dark Pattern

A deceptive user interface design that manipulates individuals into making decisions they would not otherwise make, increasingly amplified by AI-driven personalisation.

Definition

A dark pattern is a user interface or user experience design element deliberately crafted to trick, coerce, or nudge users into actions that benefit the system operator at the user’s expense. The term was coined by UX researcher Harry Brignull in 2010 and covers techniques such as hidden costs, forced continuity, misdirection, and confirmshaming. The integration of AI amplifies these practices significantly: machine learning models can personalise manipulative interfaces in real time, adapting deceptive elements to individual psychological profiles, behavioural histories, and vulnerability indicators. AI-powered dark patterns are harder to detect because they may present differently to each user, which complicates both regulatory enforcement and empirical research into their prevalence and impact.

How It Relates to AI Threats

Dark patterns are a primary mechanism within the Human-AI Control domain, specifically under deceptive and manipulative interface threats. When AI systems personalise interface elements to exploit individual cognitive biases, they erode informed consent and human agency. This connects directly to implicit authority transfer, where users unknowingly cede decision-making to systems designed to override their preferences. AI-driven dark patterns also intersect with privacy threats when manipulative consent interfaces trick users into sharing more data than intended. The scalability of AI personalisation means that dark patterns can operate at population scale with individualised precision, representing a qualitative escalation over traditional static deceptive designs.

Why It Occurs

  • AI personalisation enables real-time adaptation of manipulative interface elements to individual users
  • Platform business models create financial incentives to maximise engagement and data collection
  • Regulatory frameworks lag behind the technical sophistication of AI-driven interface manipulation
  • Users lack visibility into how interfaces are personalised and cannot compare experiences
  • Competitive pressure drives adoption of engagement-maximising designs regardless of ethical cost
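The last two points imply a concrete audit strategy: because no single user can compare experiences, an external auditor can probe the same interface with differing synthetic profiles and flag divergence. The sketch below is purely illustrative; `personalised_variant` is a hypothetical stand-in for an AI personaliser, and every profile field and variant name is an assumption made up for this example, not drawn from any real system.

```python
# Hypothetical consent-dialog variants an AI personaliser might choose from.
VARIANTS = {
    "neutral": "Accept / Decline shown with equal prominence",
    "confirmshame": "Accept / 'No, I don't care about savings'",
    "buried_decline": "Accept button shown; decline hidden in a settings menu",
}

def personalised_variant(profile: dict) -> str:
    """Stand-in for an AI personaliser: returns the variant predicted to
    maximise consent for this profile (purely illustrative logic)."""
    if profile.get("prior_declines", 0) > 2:
        return "confirmshame"
    if profile.get("session_haste", 0.0) > 0.7:
        return "buried_decline"
    return "neutral"

def audit_for_personalisation(probe_profiles: list[dict]) -> set[str]:
    """Probe the interface with synthetic profiles; observing more than
    one variant indicates per-user interface personalisation."""
    return {personalised_variant(p) for p in probe_profiles}

# Synthetic probe profiles spanning different behavioural signals.
probes = [
    {"prior_declines": 0, "session_haste": 0.1},
    {"prior_declines": 3, "session_haste": 0.2},
    {"prior_declines": 0, "session_haste": 0.9},
]
seen = audit_for_personalisation(probes)
print(f"Variants observed: {sorted(seen)}")
print("Personalisation detected" if len(seen) > 1 else "Static interface")
```

In practice the hard part is the probing itself: a real audit would need instrumented accounts or crawler personas rather than a callable function, which is precisely why individual users, each seeing only their own variant, cannot perform this comparison.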

Real-World Context

While no specific incidents in the TopAIThreats taxonomy currently document AI-driven dark patterns, regulatory attention is intensifying. The EU Digital Services Act and the AI Act both address manipulative AI interfaces, with the latter explicitly prohibiting AI systems that deploy subliminal techniques to materially distort behaviour. The U.S. Federal Trade Commission has pursued enforcement actions against deceptive design practices. Academic research from Princeton and Carnegie Mellon has catalogued thousands of dark pattern instances across major platforms, establishing an empirical foundation for ongoing regulatory and technical countermeasures.

Last updated: 2026-02-14