Intentional Fraud
Why AI Threats Occur
Referenced in 15 of 97 documented incidents (15%) · 3 critical · 12 high · 2019–2026
Deliberate use of AI capabilities to deceive, impersonate, or defraud individuals and organizations for financial, political, or personal gain.
| Field | Value |
| --- | --- |
| Code | CAUSE-001 |
| Category | Malicious Misuse |
| Lifecycle | Operations, Incident response |
| Control Domains | Fraud controls, KYC / identity verification, Customer verification |
| Likely Owner | Fraud / Security / Ops |
| Incidents | 15 (15% of 97 total) · 2019–2026 |
Definition
Unlike accidental AI harms where systems produce unintended outcomes, intentional fraud involves human actors who purposefully deploy AI tools as instruments of deception. The threat is distinguished by the combination of AI’s ability to produce highly convincing synthetic media at scale with traditional fraud motivations:
- Financial theft — deepfake video and cloned audio used to authorize fraudulent wire transfers
- Identity theft — synthetic identities generated to bypass KYC and verification systems
- Political manipulation — fabricated audio and video used to influence elections and public opinion
- Reputational destruction — non-consensual synthetic media weaponized for harassment and extortion
Because AI has dramatically lowered the cost and skill required to produce convincing forgeries, while detection capabilities consistently lag behind generation capabilities, intentional fraud is the most frequently cited causal factor in the Malicious Misuse category.
Why This Factor Matters
AI-enabled fraud has produced documented financial losses ranging from $243,000 in a single voice cloning attack against a UK energy company (INC-19-0001) to $25 million in a deepfake video conference fraud targeting a Hong Kong multinational (INC-24-0001). The FBI documented a surge in AI-enhanced financial scams targeting seniors, with individual losses frequently exceeding $100,000 per victim (INC-24-0004).
The scale of AI fraud is expanding because the tools are increasingly accessible. Voice cloning requires only a few seconds of sample audio. Real-time deepfake video can impersonate anyone for whom sufficient reference material exists. Language models generate personalized phishing messages indistinguishable from legitimate communications. These capabilities are available through both commercial platforms (whose safety guardrails can be circumvented) and purpose-built criminal tools like WormGPT (INC-23-0006).
This factor persists because synthetic media detection technology remains reactive — new generation techniques consistently outpace detection, and the economic incentives for fraud far exceed the costs of producing convincing synthetic content.
How to Recognize It
Synthetic media impersonation in video or audio calls targeting executives. Deepfake video and cloned audio are used to impersonate authority figures in real-time communications. The Hong Kong deepfake CFO fraud (INC-24-0001) used real-time video deepfakes of multiple executives in a conference call to authorize a $25 million wire transfer. The UK energy company CEO fraud (INC-19-0001) used voice cloning to impersonate the parent company’s CEO, directing an urgent $243,000 transfer.
Fraudulent AI-generated content passed off as authentic documents or communications. AI-generated articles were published under fabricated author identities at Sports Illustrated (INC-23-0015), undermining the publication’s credibility. Deepfake audio was used to influence the Slovak parliamentary election (INC-23-0007) and frame a high school principal in Baltimore (INC-24-0003).
Automated phishing at scale using language models for personalization. WormGPT (INC-23-0006) demonstrated a purpose-built criminal AI tool for generating business email compromise messages, eliminating the language barriers and effort that previously limited phishing scale.
Financial fraud schemes exploiting AI-generated credibility signals. The FBI elder fraud report (INC-24-0004) documented how AI-generated voice clones and synthetic identities are used in grandparent scams, romance fraud, and investment schemes targeting vulnerable populations.
Non-consensual synthetic media weaponized against individuals. AI-generated deepfakes of students at Westfield High School (INC-23-0008) and non-consensual intimate images of Taylor Swift (INC-24-0008) demonstrate how AI-generated content can be weaponized for harassment, extortion, and reputational harm.
Cross-Factor Interactions
Social Engineering (CAUSE-004): Intentional fraud and social engineering are near-inseparable in practice. The Hong Kong CFO fraud (INC-24-0001) combined deepfake video (the AI fraud component) with authority exploitation and urgency pressure (the social engineering component). The voice cloning grandparent scams (INC-23-0004) combined cloned voices with emotional manipulation targeting elderly victims. AI provides the convincing synthetic content; social engineering provides the psychological framework that makes targets act on it.
Weaponization (CAUSE-003): When fraud tools are purpose-built for criminal use, the intersection moves from opportunistic misuse to weaponization. WormGPT (INC-23-0006) represents this boundary — an AI tool explicitly designed and marketed for business email compromise, with safety guardrails deliberately removed.
Mitigation Framework
Organizational Controls
- Deploy synthetic media detection tools at identity verification checkpoints — particularly for high-value transactions and executive communications
- Implement multi-factor identity verification for wire transfers and sensitive requests, requiring out-of-band confirmation through a separate channel
- Establish provenance tracking for AI-generated content using C2PA standards and digital watermarking
- Train staff on deepfake and AI-generated fraud indicators, including artifacts in synthetic audio and video
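The out-of-band confirmation control above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the function names, the eight-character code format, and the shared-secret scheme are all assumptions for the sketch.

```python
import hashlib
import hmac
import secrets

def issue_confirmation_code(secret: bytes, transfer_id: str) -> str:
    """Derive a one-time confirmation code bound to a specific transfer.
    The code is delivered over a separate, pre-established channel (e.g.
    a call-back to a number on file), never the channel that made the
    request."""
    digest = hmac.new(secret, transfer_id.encode(), hashlib.sha256).hexdigest()
    return digest[:8]  # short enough to read aloud on the call-back

def authorize_transfer(secret: bytes, transfer_id: str, code_from_requester: str) -> bool:
    """Approve only if the requester echoes back the out-of-band code.
    A deepfaked video call alone cannot produce it, because the code never
    travels over the compromised channel."""
    expected = issue_confirmation_code(secret, transfer_id)
    return hmac.compare_digest(expected, code_from_requester)

# Usage: the code is issued out-of-band, then echoed back by the requester.
secret = secrets.token_bytes(32)
code = issue_confirmation_code(secret, "WIRE-2024-0001")
```

The key design point is that authorization depends on possession of a second channel, not on how convincing the requester looks or sounds.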
Technical Controls
- Integrate deepfake detection into video conferencing and voice communication platforms
- Deploy content authenticity verification (C2PA) for media provenance tracking
- Implement AI-powered anomaly detection on financial transaction patterns to identify fraud indicators
- Require cryptographic identity verification for high-value communications, not just visual or auditory confirmation
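The last control, cryptographic identity verification, can be illustrated with a challenge-response sketch. It assumes keys are enrolled in advance through a trusted process and uses Ed25519 signatures via the third-party `cryptography` package; the enrollment flow and variable names are hypothetical.

```python
# Challenge-response identity verification: the claimant proves possession
# of a pre-registered private key, something a deepfake cannot reproduce.
import secrets

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment (done once, in advance, independent of any live call):
# the executive's public key is registered with the verifier.
exec_private_key = Ed25519PrivateKey.generate()
exec_public_key = exec_private_key.public_key()

# Verification: the recipient issues a fresh random challenge; only the
# holder of the private key can sign it. The face and voice on the call
# are irrelevant to the outcome.
challenge = secrets.token_bytes(32)
signature = exec_private_key.sign(challenge)  # performed on the claimant's device

try:
    exec_public_key.verify(signature, challenge)
    verified = True
except InvalidSignature:
    verified = False
```

Because the challenge is random and single-use, a replayed signature from an earlier call fails verification.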
Monitoring & Detection
- Monitor for synthetic media targeting organizational executives and brand assets
- Track emerging fraud techniques through threat intelligence sharing communities
- Implement real-time alerting for unusual financial transaction patterns, particularly those initiated through digital communication channels
- Conduct regular social engineering simulations that include AI-generated synthetic media scenarios
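As a minimal illustration of transaction-pattern alerting, a z-score check against an account's history flags amounts far outside the baseline (such as the $243,000 transfer in INC-19-0001). This is a deliberately simple sketch; production fraud systems use richer features (counterparty, channel, timing) and learned models.

```python
import statistics

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Flag transaction amounts whose z-score against the account's
    historical baseline exceeds the threshold. Returns (amount, z) pairs."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    alerts = []
    for amount in new_amounts:
        z = (amount - mean) / stdev
        if abs(z) > z_threshold:
            alerts.append((amount, round(z, 1)))
    return alerts

# Typical wire amounts for the account, then two new requests:
history = [1_200, 950, 1_100, 1_300, 1_050, 990, 1_150]
alerts = flag_anomalies(history, [1_080, 243_000])
```

Tying alerts like these to the communication channel that initiated the request (per the real-time alerting bullet above) is what makes them actionable against AI-enabled fraud specifically.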
Lifecycle Position
Intentional fraud is primarily an Operations and Incident response concern. Unlike design-phase factors that can be addressed before deployment, intentional fraud exploits deployed AI capabilities that are functioning as designed — the problem is not that the AI is malfunctioning, but that threat actors are using AI tools to produce convincing forgeries. The operational phase requires continuous monitoring for emerging fraud techniques and rapid incident response when attacks are detected.
The Incident response dimension is critical because AI fraud often requires time-sensitive action: freezing fraudulent wire transfers, taking down deepfake content, and alerting potential victims before further harm occurs.
Regulatory Context
The EU AI Act prohibits AI systems that deploy subliminal, manipulative, or deceptive techniques that cause significant harm (Article 5), directly addressing AI-enabled fraud. Deepfake regulations are emerging across jurisdictions: the EU requires disclosure of AI-generated content, while several U.S. states have enacted specific deepfake fraud statutes. NIST AI RMF addresses fraud risk under the GOVERN function, requiring organizations to identify and manage misuse risks for AI capabilities they develop or deploy. The FBI has issued multiple advisories on AI-enhanced fraud targeting financial institutions and individual consumers, establishing AI fraud as a recognized and growing threat category.
Use in Retrieval
This page targets queries about AI fraud, deepfake fraud, AI-enabled financial fraud, voice cloning scams, synthetic identity fraud, and AI impersonation. It documents how AI capabilities are deliberately misused for deception, covering deepfake impersonation (video and audio), voice cloning, documented losses ranging from $243K to $25M per incident, KYC bypass, synthetic media detection, and the relationship between AI fraud and social engineering. For specific attack patterns, see deepfake identity hijacking and synthetic media manipulation. For the social engineering dimension, see social engineering.
Incident Record
15 documented incidents involve intentional fraud as a causal factor, spanning 2019–2026.
Related Causal Factors