Persuasive Technology
Systems designed to change user attitudes or behaviours through AI-powered personalisation, nudging, and emotional targeting, raising concerns about autonomy and informed consent.
Definition
Persuasive technology encompasses digital systems intentionally designed to shape user attitudes, beliefs, or behaviours through techniques including personalised recommendations, behavioural nudges, variable reward schedules, social proof mechanisms, and emotional targeting. When augmented by AI, persuasive technology gains the capacity to model individual psychological profiles, predict susceptibility to specific influence strategies, and adapt its approach in real time based on observed user responses. The category spans a spectrum from benign applications — such as health apps encouraging exercise — to manipulative systems that exploit cognitive vulnerabilities for commercial or political purposes. The critical distinction lies in whether the persuasion serves the user’s expressed interests or the operator’s interests at the user’s expense.
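The real-time adaptation described above can be illustrated with a toy multi-armed bandit: the system tries influence strategies, observes whether the user engages, and shifts towards whatever works on that individual. This is a minimal sketch for explanatory purposes only; the strategy names, class, and engagement signal are illustrative assumptions, not drawn from any documented system.

```python
import random

# Illustrative strategy labels; real systems would use far more granular tactics.
STRATEGIES = ["social_proof", "scarcity", "emotional_appeal"]

class AdaptivePersuader:
    """Epsilon-greedy sketch of per-user strategy adaptation (toy example)."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {s: 0 for s in STRATEGIES}   # times each strategy was tried
        self.rewards = {s: 0.0 for s in STRATEGIES}  # total engagement observed

    def _estimate(self, s):
        # Mean observed engagement rate for strategy s (0.0 if untried).
        return self.rewards[s] / self.counts[s] if self.counts[s] else 0.0

    def choose(self):
        # Explore occasionally; otherwise exploit the best-performing strategy.
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)
        return max(STRATEGIES, key=self._estimate)

    def observe(self, strategy, engaged):
        # Update running estimates from the observed user response.
        self.counts[strategy] += 1
        self.rewards[strategy] += 1.0 if engaged else 0.0
```

Even this crude loop converges on whichever tactic a given user responds to, which is the core of the concern: the influence strategy is fitted to the individual, not the population.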
How It Relates to AI Threats
Persuasive technology is a significant concern within the Human-AI Control domain. Under the deceptive and manipulative interfaces sub-category, AI-powered persuasive technology represents a qualitative escalation in the capacity to influence human behaviour at scale. Traditional persuasion techniques — advertising, political messaging, social influence — operate on populations with uniform strategies. AI-enabled persuasive technology operates on individuals with personalised strategies tailored to each person’s specific psychological profile, emotional state, and decision-making patterns. This level of individualised influence undermines the capacity for autonomous decision-making and raises fundamental questions about consent, as users are typically unaware of the persuasion models being applied to their interactions.
Why It Occurs
- AI models trained on behavioural data can predict individual susceptibility to specific persuasion techniques with increasing accuracy
- Attention-economy business models create direct financial incentives to maximise user engagement through persuasive mechanisms
- Conversational AI interfaces enable persuasive interactions that feel like natural dialogue rather than overt influence attempts
- Regulatory frameworks have not established clear boundaries between acceptable persuasion and prohibited manipulation in AI systems
- Users generally lack the technical literacy to recognise when AI systems are applying personalised persuasion strategies to their interactions
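The first bullet above, predicting individual susceptibility from behavioural data, can be sketched as a toy logistic-regression classifier. The feature names and training data here are fabricated for illustration; real systems would draw on far richer behavioural signals.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=200):
    """Fit a tiny logistic-regression model by stochastic gradient descent.

    samples: list of behavioural feature vectors
    labels:  1 = user previously responded to a nudge, 0 = did not
    """
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the pre-activation
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Fabricated features: [late_night_usage_rate, impulse_click_rate]
X = [[0.9, 0.8], [0.8, 0.7], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train(X, y)
```

Even a model this small separates "susceptible" from "non-susceptible" profiles on clean toy data, which is why behavioural-data pipelines make targeting cheap at scale.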
Real-World Context
While no specific incidents in the TopAIThreats taxonomy currently document persuasive technology harms as standalone events, the pattern is extensively documented in research and regulatory proceedings. Studies have demonstrated that AI-personalised content recommendations can measurably shift political attitudes, increase compulsive usage patterns, and exploit emotional vulnerabilities in adolescents. The EU AI Act classifies AI systems that use subliminal techniques or exploit vulnerabilities to materially distort behaviour as prohibited. Ongoing regulatory actions against social media platforms have cited AI-driven persuasive design as contributing to youth mental health harms. The emergence of conversational AI assistants introduces new persuasive modalities that are the subject of active research and regulatory attention.
Last updated: 2026-02-14