Synthetic Identity
A fabricated identity constructed by combining real and fictitious personal information — such as genuine Social Security numbers with fake names and addresses — or by using AI-generated biometric data (face images, voice prints) to create a persona that does not correspond to any real individual but can pass identity verification systems.
Definition
A synthetic identity is a fabricated persona created by blending real personal data (often stolen) with fictitious information, or by using AI-generated biometric data, to create an identity that can pass automated and sometimes manual verification checks. Unlike traditional identity theft, where an attacker assumes a real person’s complete identity, synthetic identity fraud creates a new identity that does not correspond to any existing individual, making it harder to detect because there is no single victim reporting fraudulent activity. AI capabilities — deepfake face generation, voice cloning, document fabrication — have dramatically enhanced the sophistication and scalability of synthetic identity creation. A synthetic identity can have a computer-generated face, a cloned voice, fabricated identity documents, and AI-written social media history.
How It Relates to AI Threats
Synthetic identities operate at the intersection of the Information Integrity Threats, Security and Cyber Threats, and Privacy and Surveillance Threats domains. AI-generated faces (via GANs or diffusion models) can create photo-realistic identity photographs for people who do not exist. Voice cloning technology enables synthetic identities to pass voice-based verification systems. LLMs can generate plausible social media histories, business correspondence, and biographical details. The combination of these AI capabilities allows creation of synthetic identities at scale that can bypass traditional and even advanced identity verification systems, enabling fraud, money laundering, and evasion of sanctions.
Why It Occurs
- AI image generation creates photo-realistic faces for non-existent people at negligible cost
- Voice cloning requires only seconds of sample audio to create convincing voice prints
- Automated identity verification systems (KYC, customer onboarding) were not designed to detect AI-generated biometric data
- Data breaches provide real personal data components (SSNs, addresses) that can anchor synthetic identities
- The economic incentive for financial fraud — credit card applications, loan fraud, money laundering — drives sophisticated synthetic identity operations
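The third point above can be made concrete with a toy sketch: in a verification pipeline that checks each field in isolation, a blended identity passes because every component is individually plausible (the SSN is well-formed, the name and address look real). All record values, field names, and `validate_*` helpers below are hypothetical, and the checks are deliberately simplistic; this is an illustration of the failure mode, not a real KYC system.

```python
# Toy sketch: a synthetic identity passes per-field KYC checks because each
# field is individually plausible; only cross-field or cross-database checks
# can reveal that the combination never belonged to a real person.
import re

def validate_ssn_format(ssn: str) -> bool:
    """Format-only check: AAA-GG-SSSS with no all-zero group."""
    m = re.fullmatch(r"(\d{3})-(\d{2})-(\d{4})", ssn)
    return bool(m) and m.group(1) != "000" and m.group(2) != "00" and m.group(3) != "0000"

def validate_name(name: str) -> bool:
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z .'-]+", name))

def validate_address(addr: str) -> bool:
    return bool(re.fullmatch(r"\d+ [A-Za-z .]+", addr))

def field_by_field_kyc(record: dict) -> bool:
    """Per-field pipeline: every check passes for a blended identity."""
    return (validate_ssn_format(record["ssn"])
            and validate_name(record["name"])
            and validate_address(record["address"]))

# Hypothetical blended record: a real-format SSN paired with a fabricated
# name and address that never co-occurred for any real individual.
synthetic = {"ssn": "219-09-9999", "name": "Avery Quinn", "address": "12 Elm St"}
print(field_by_field_kyc(synthetic))  # → True: plausible field by field
```

The point of the sketch is that no single field is "wrong", so there is no single victim and no single anomalous value to flag, which is exactly why synthetic identity fraud evades per-field validation.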
Real-World Context
The US Federal Reserve has identified synthetic identity fraud as the fastest-growing type of financial crime, with estimated annual losses exceeding $6 billion. Documented cases include synthetic identities used to open bank accounts, obtain credit cards, apply for loans, and launder money. AI-generated profile photographs have been identified in fake LinkedIn profiles used for espionage and social engineering campaigns. Financial institutions are developing AI-based countermeasures including biometric liveness detection and cross-database identity verification, but the arms race between synthetic identity creation and detection continues to escalate.
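The cross-database verification mentioned above can be illustrated with a simple heuristic: flag an SSN that anchors multiple distinct names across pooled applications, a common signal that one real identifier is seeding several synthetic personas. This is a minimal sketch under assumed data; the records, names, and `flag_shared_anchors` function are hypothetical, not any institution's actual detection logic.

```python
# Illustrative heuristic (not a real institution's system): flag SSNs tied
# to two or more distinct names across applications, suggesting one stolen
# identifier is anchoring multiple synthetic identities.
from collections import defaultdict

def flag_shared_anchors(applications, threshold=2):
    """Group applications by SSN; flag SSNs linked to >= threshold names."""
    names_by_ssn = defaultdict(set)
    for app in applications:
        names_by_ssn[app["ssn"]].add(app["name"].lower())
    return {ssn: sorted(names) for ssn, names in names_by_ssn.items()
            if len(names) >= threshold}

# Hypothetical application records pooled across institutions.
apps = [
    {"ssn": "219-09-9999", "name": "Avery Quinn"},
    {"ssn": "219-09-9999", "name": "Jordan Vale"},   # same SSN, new persona
    {"ssn": "587-44-1234", "name": "Dana Reyes"},
]
print(flag_shared_anchors(apps))
# → {'219-09-9999': ['avery quinn', 'jordan vale']}
```

Real deployments combine many such signals (credit-file depth, address velocity, biometric liveness scores), but the underlying idea is the same: detection requires correlating records across sources rather than validating any one record alone.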
Last updated: 2026-04-03