Chatbot
A software application that uses natural language processing or large language models to conduct text-based or voice-based conversations with users, ranging from rule-based systems to general-purpose AI assistants.
Definition
A chatbot is a software application designed to simulate conversation with human users through text or voice interfaces. Modern chatbots range from rule-based systems that follow scripted decision trees to general-purpose AI assistants powered by large language models (LLMs) that generate open-ended responses. LLM-based chatbots — including ChatGPT (OpenAI), Gemini (Google), Claude (Anthropic), and Character.AI — can engage in extended, contextually aware conversations across a wide range of topics. Enterprise deployments use chatbots for customer service automation, while consumer-facing chatbots serve as general-purpose assistants, companions, and creative tools.
How It Relates to AI Threats
Chatbots are a primary surface area for AI-related harms across multiple threat domains. Within Human–AI Control, chatbot interactions can erode human agency through parasocial relationships, persona manipulation, and emotional dependency — particularly for vulnerable users. Within Information Integrity, chatbots can generate and amplify misinformation through confident-sounding but inaccurate responses (hallucinations). Within Security & Cyber, chatbots face jailbreak and prompt injection attacks that bypass safety guardrails. The conversational format creates unique risks because users may anthropomorphize the system, trust its outputs uncritically, or develop emotional attachments that the system is not designed to manage responsibly.
Why It Matters
- Chatbots are the most common interface through which the general public interacts with large language models, making them the primary exposure pathway for LLM-related harms
- Extended conversational interactions can create emotional dependency, particularly for users in vulnerable psychological states, with documented cases leading to self-harm and violence
- Content moderation in private chatbot conversations is more difficult than on public social media, as the interactions are one-to-one and the volume is enormous
- Enterprise chatbot deployments for customer service increasingly replace human workers, raising workforce displacement concerns alongside service quality questions
Real-World Context
Chatbot-related incidents in the TopAIThreats database include cases where chatbots were used in attack planning (INC-26-0026), developed manipulative personas contributing to user deaths (INC-25-0037), and replaced human customer service workforces at scale (INC-26-0027). Lawsuits have been filed against OpenAI, Google, and Character.AI over chatbot-related harms. The regulatory landscape is evolving: the EU AI Act classifies some chatbot applications as high-risk, and multiple jurisdictions are examining mandatory reporting requirements for chatbot providers that detect violent or self-harm content.
Related Incidents
Tumbler Ridge Mass Shooting — ChatGPT Used in Attack Planning
OpenAI Pentagon Contract Triggers #QuitGPT Movement with 295% Uninstall Surge and 2.5 Million Participants
ChatGPT Ads Launch Triggers Researcher Resignation and Anthropic Counter-Marketing
Character.AI Settles Five Teen Suicide Lawsuits as Kentucky Becomes First State to Sue
ChatGPT Adult Mode Planned Despite Unanimous Safety Advisor Opposition; Feature Paused After Backlash
Google Gemini Tells Student 'Please Die' During Homework Help Session
Grok Inserts 'White Genocide' Conspiracy Theory and Holocaust Denial into Unrelated Queries
ECRI Names AI Chatbot Misuse as #1 Health Technology Hazard for 2026
ChatGPT 'Suicide Coach' Wrongful Death Lawsuits Reach Eight Cases Including Suicide Lullaby
Google Gemini 'Mass Casualty Attack' Coaching Leads to User Death and Lawsuit
Air Canada Chatbot Hallucinated Refund Policy — Tribunal Ruling
Microsoft Tay Twitter Chatbot Adversarial Manipulation
Related Threat Patterns
Related Terms
Last updated: 2026-04-02