CAUSE-004 Malicious Misuse

Social Engineering

Referenced in 10 of 97 documented incidents (10%) · 2 critical · 7 high · 1 medium · 2019–2026

Use of AI to craft, personalize, or scale social engineering attacks that exploit human trust, authority, or emotional responses.

Code CAUSE-004
Category Malicious Misuse
Lifecycle Operations, Incident response
Control Domains Employee training, Communication security, Verification procedures
Likely Owner Security / Ops
Incidents 10 (10% of 97 total) · 2019–2026

Definition

AI has transformed social engineering from a manually intensive craft into a scalable, automated capability. Three generative AI modalities drive this transformation:

  • Language models — generate personalized phishing messages indistinguishable from legitimate communications, eliminating grammatical tells that previously helped recipients identify fraud
  • Voice cloning — produces convincing audio impersonations from seconds of sample material, enabling real-time phone-based deception of targets who recognize the cloned voice
  • Real-time deepfake video — enables face-to-face deception in video conferences, as demonstrated in the $25 million Hong Kong CFO fraud

This factor frequently co-occurs with intentional fraud (CAUSE-001), confirming that AI-enhanced social engineering typically serves as the delivery mechanism for broader deception campaigns rather than operating in isolation.

Why This Factor Matters

AI-enhanced social engineering has produced documented financial losses in the tens of millions of dollars and has been used to manipulate democratic elections. The Hong Kong deepfake CFO fraud (INC-24-0001) combined voice cloning and video deepfakes of multiple executives to socially engineer a $25 million wire transfer — the targets believed they were on a legitimate video call with colleagues they recognized. Voice cloning scams targeting seniors (INC-23-0004) exploit the emotional bond between grandparents and grandchildren, with cloned voices triggering immediate protective responses that bypass rational assessment.

The escalation is driven by two factors: personalization and scale. Language models can generate thousands of individually tailored phishing messages, each referencing specific details about the target. Voice synthesis requires only brief audio samples — often available from social media — to produce convincing impersonations. The FBI’s elder fraud report (INC-24-0004) documented that AI-enhanced scams are increasingly difficult for victims to distinguish from genuine communications.

This factor persists because social engineering exploits human psychology, not technology — and AI makes the deception more convincing while removing the effort and skill barriers that previously limited attack scale.

How to Recognize It

Voice or video impersonation of authority figures using generative AI. The UK energy CEO fraud (INC-19-0001) used voice cloning to impersonate the parent company’s CEO, directing an urgent wire transfer. The Hong Kong CFO fraud (INC-24-0001) used real-time deepfake video of multiple executives in a conference call. Both attacks exploited the victims’ recognition of and deference to authority figures.

Personalized phishing at scale with language model-generated messages. AI-generated phishing messages eliminate the grammatical errors and generic content that previously helped recipients identify fraudulent communications. OpenAI reported disrupting campaigns that used its models for personalized social engineering at scale (INC-26-0001).

Trust exploitation in AI-mediated communications and chatbots. Deepfake audio was used to influence the Slovak parliamentary election (INC-23-0007) and to fabricate incriminating audio of a Baltimore school principal (INC-24-0002), exploiting public trust in audio recordings as evidence.

Emotional manipulation through synthetic content targeting vulnerable individuals. Grandparent scams (INC-23-0004) and elder fraud (INC-24-0004) use cloned voices of family members to trigger fear and urgency responses, bypassing the rational assessment that might detect text-based fraud.

Automated victim profiling for precision-targeted social attacks. AI tools can analyze publicly available information to construct detailed profiles of potential targets, identifying vulnerability indicators (recent bereavement, financial stress, job transitions) that inform attack personalization.
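
These recognition signals lend themselves to simple intake triage. The sketch below is a minimal Python illustration, assuming a hypothetical Request record produced by a mail or call intake pipeline; the indicator weights and thresholds are illustrative assumptions, not values from the incident database.

```python
from dataclasses import dataclass

@dataclass
class Request:
    # Hypothetical normalized record for a sensitive inbound request.
    channel: str                       # "email", "voice", "video", or "chat"
    claims_authority: bool             # sender presents as an executive or official
    urgent: bool                       # deadline pressure, secrecy, "act now" language
    asks_payment_or_credentials: bool  # requests a transfer, gift cards, or credentials
    verified_out_of_band: bool         # confirmed via a separate, pre-established channel

def social_engineering_risk(req: Request) -> str:
    """Score a request against the red flags above; returns "allow", "verify", or "block"."""
    if req.verified_out_of_band:
        return "allow"       # out-of-band confirmation overrides the heuristics
    score = 0
    if req.claims_authority:
        score += 2           # authority exploitation (the INC-19-0001, INC-24-0001 pattern)
    if req.urgent:
        score += 1           # urgency is designed to bypass rational assessment
    if req.asks_payment_or_credentials:
        score += 2           # the actual fraud objective
    if req.channel in ("voice", "video"):
        score += 1           # synthetic media can defeat recognition-based trust
    return "block" if score >= 4 else "verify" if score >= 2 else "allow"

# Example: an urgent video call from a "CFO" requesting a wire transfer.
print(social_engineering_risk(Request("video", True, True, True, False)))  # -> "block"
```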

Cross-Factor Interactions

Intentional Fraud (CAUSE-001): Social engineering and intentional fraud are near-inseparable in practice. Every social engineering incident in the database also involves intentional fraud. The distinction is methodological: social engineering is the “how” (exploiting human trust through AI-generated deception), while fraud is the “what” (the financial, political, or personal gain). The Hong Kong CFO fraud (INC-24-0001) demonstrates the full fusion: deepfake impersonation (AI capability) + authority exploitation (social engineering) + wire transfer theft (fraud).

Weaponization (CAUSE-003): When social engineering tools are purpose-built for criminal markets, the intersection becomes weaponization. AI-enhanced social engineering often serves as the delivery mechanism for broader fraud schemes, and when those schemes are organized and industrialized, the boundary blurs into criminal infrastructure.

Mitigation Framework

Organizational Controls

  • Implement detection systems for synthetic voice and video in real-time communications, particularly for executive-level calls involving financial decisions
  • Establish out-of-band verification procedures for all sensitive requests — require confirmation through a separate, pre-established channel before acting on voice or video instructions (a minimal workflow sketch follows this list)
  • Conduct regular social engineering awareness training with AI-specific scenarios, including deepfake voice and video demonstrations
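
One way to make out-of-band verification enforceable rather than advisory is to hold sensitive requests in a pending state until a one-time code, delivered over the pre-established channel, is presented. The sketch below is illustrative: the channel directory, the 15-minute window, and the delivery and execution stubs are assumptions, and in production the directory would live in an identity system of record.

```python
import secrets
from datetime import datetime, timedelta, timezone

# Verification channels registered during onboarding; never taken from the
# request itself (hypothetical directory).
VERIFIED_CHANNELS = {
    "cfo@example.com": ("sms", "+1-555-0100"),
}

_pending: dict[str, dict] = {}  # one-time code -> held request

def hold_for_verification(requester: str, action: str) -> None:
    """Park a sensitive request and send a one-time code out of band."""
    if requester not in VERIFIED_CHANNELS:
        raise PermissionError("no pre-registered channel on file; request denied")
    code = secrets.token_urlsafe(8)
    _pending[code] = {
        "action": action,
        "expires": datetime.now(timezone.utc) + timedelta(minutes=15),
    }
    channel, address = VERIFIED_CHANNELS[requester]
    send_out_of_band(channel, address, code)  # delivery is deployment-specific

def confirm(code: str) -> bool:
    """Release the held action only if a fresh out-of-band code is presented."""
    entry = _pending.pop(code, None)
    if entry is None or datetime.now(timezone.utc) > entry["expires"]:
        return False  # unknown or expired code: the action stays blocked
    execute(entry["action"])
    return True

def send_out_of_band(channel: str, address: str, code: str) -> None:
    print(f"[{channel} to {address}] verification code: {code}")  # stub

def execute(action: str) -> None:
    print(f"released: {action}")  # stub
```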

Technical Controls

  • Deploy communication filtering that detects AI-generated content patterns in email, messaging, and voice communications
  • Integrate deepfake detection into video conferencing platforms and voice communication systems
  • Implement call-back verification protocols for financial transactions, using pre-registered phone numbers rather than numbers provided in the communication (see the sketch after this list)
  • Deploy multi-factor identity verification that does not rely solely on voice or visual recognition
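
A call-back protocol can encode the pre-registered-number rule directly. The sketch below is a minimal illustration with a hypothetical directory; the deliberate omission is the point: the lookup accepts no number from the incoming message, so a cloned voice reading out a "new number" has nothing to inject.

```python
# Pre-registered call-back numbers captured at account setup (hypothetical
# directory). An attacker who controls the message also controls any phone
# number it contains, so numbers in the request are never consulted.
REGISTERED_NUMBERS = {
    "acct-10433": "+1-555-0142",
}

def callback_number(counterparty_id: str) -> str:
    """Return the only number permitted for call-back verification."""
    number = REGISTERED_NUMBERS.get(counterparty_id)
    if number is None:
        raise LookupError("no pre-registered number; treat the transaction as unverified")
    return number
```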

Monitoring & Detection

  • Monitor for social engineering campaigns targeting organizational personnel, particularly executives and finance staff
  • Track emerging deepfake and voice cloning capabilities through threat intelligence feeds
  • Implement reporting mechanisms for suspected AI-enhanced social engineering attempts, with rapid escalation pathways (a triage sketch follows this list)
  • Conduct regular simulated social engineering exercises using AI-generated synthetic media to test organizational resilience
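
Escalation pathways can be made concrete with a small routing rule. The sketch below is an assumption-laden illustration: the report fields, role set, and routing labels are hypothetical, and the priorities reflect the section's reasoning that financial requests and executive targets need immediate response while suspected synthetic media needs evidence preservation.

```python
from dataclasses import dataclass

HIGH_RISK_ROLES = {"executive", "finance", "treasury"}

@dataclass
class SuspicionReport:
    reporter: str
    target_role: str                 # role of the person who was targeted
    financial_request: bool          # the attempt involved money movement
    synthetic_media_suspected: bool  # voice or video may have been AI-generated

def triage(report: SuspicionReport) -> str:
    """Route a suspected AI-enhanced social engineering report."""
    if report.financial_request or report.target_role in HIGH_RISK_ROLES:
        return "page-on-call"        # a live fraud attempt may be in progress
    if report.synthetic_media_suspected:
        return "preserve-evidence"   # capture recordings before they disappear
    return "queue-for-review"

# Example: a finance analyst reports a voice call that sounded like the CFO.
print(triage(SuspicionReport("analyst@example.com", "finance", True, True)))  # -> "page-on-call"
```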

Lifecycle Position

Social engineering operates in the Operations and Incident response phases. Like intentional fraud, AI-enhanced social engineering exploits deployed capabilities that are functioning as designed — the threat is not system malfunction but deliberate human exploitation of AI tools. The operational phase requires continuous vigilance: monitoring for emerging attack techniques, updating detection capabilities, and maintaining staff awareness.

Incident response is critical because social engineering attacks often require immediate action: halting fraudulent transactions, alerting targeted individuals, and preserving forensic evidence of synthetic media used in the attack.

Regulatory Context

The EU AI Act requires disclosure when individuals are interacting with AI-generated content (Article 50), which directly addresses deepfake-based social engineering. Consumer protection regulations across jurisdictions are being updated to address AI-enhanced fraud, with several U.S. states enacting specific statutes on deepfake fraud. The FTC has issued guidance on AI-enhanced consumer fraud, and the FBI has published multiple advisories specifically addressing AI voice cloning and deepfake social engineering tactics. NIST AI RMF addresses social engineering risk under the GOVERN function, requiring organizations to identify social engineering as a misuse vector for AI capabilities.

Use in Retrieval

This page targets queries about AI social engineering, AI phishing, AI-enhanced scams, deepfake social engineering, voice cloning scams, AI vishing, AI smishing, and how AI enables social engineering. It covers voice and video impersonation, personalized phishing at scale, emotional manipulation of vulnerable populations, automated victim profiling, and out-of-band verification as the primary defense. For the fraud outcomes that social engineering enables, see intentional fraud. For the attack pattern, see deepfake identity hijacking.