INC-25-0024 · Confirmed · High · Signal

Microsoft Reports Blocking $4 Billion in AI-Enabled Fraud Attempts (2025)

Alleged

Unknown threat actors using commercially available AI tools developed, and cybercriminal networks conducting AI-enabled fraud deployed, AI-powered fraud tools (deepfake voice, synthetic identity generation, AI phishing), harming consumers and businesses targeted by AI-enhanced fraud campaigns; contributing factors included intentional fraud and social engineering.

Incident Details

Last Updated 2026-03-13

In its Cyber Signals Issue 9 report published April 2025, Microsoft disclosed that its fraud-detection systems had blocked approximately $4 billion in fraud attempts over the preceding 12 months (April 2024–April 2025). The report documented how attackers use AI tools to generate deepfake voices, synthetic identities, fake e-commerce storefronts, and AI-enhanced phishing at unprecedented scale and speed. Microsoft reported blocking 1.6 million bot sign-up attempts per hour and rejecting 49,000 fraudulent partnership enrollments.

Incident Summary

In April 2025, Microsoft published Cyber Signals Issue 9, a threat intelligence report documenting the scale and methods of AI-enabled fraud. The report disclosed that Microsoft’s fraud-detection systems had blocked approximately $4 billion in fraud attempts over the preceding 12 months (April 2024–April 2025), alongside 1.6 million bot sign-up attempts per hour and 49,000 fraudulent partnership enrollments.[1]

The report characterized a “democratisation of fraud” in which AI tools now enable low-skilled actors to generate sophisticated scams — including deepfake voices created from seconds of audio, synthetic identities, AI-generated fake e-commerce storefronts with fabricated product descriptions and customer reviews, and AI-enhanced phishing campaigns — at a scale and speed previously achievable only by sophisticated criminal organizations. Significant activity was reported originating from China and Europe, particularly Germany.[2]

Key Facts

  • Scale: $4 billion in fraud attempts blocked over 12 months (April 2024–April 2025)
  • Bot prevention: 1.6 million bot sign-up attempts blocked per hour
  • Partnership fraud: 49,000 fraudulent partnership enrollments rejected
  • AI fraud methods: Deepfake voice cloning (from seconds of audio), synthetic identities, AI-generated fake storefronts, AI-enhanced phishing
  • Geographic origins: Significant activity from China and Europe, particularly Germany
  • Microsoft response: Fraud prevention policy under Secure Future Initiative (January 2025) requiring fraud risk assessments for all product teams[3]
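
As a rough sanity check on these figures, the sketch below annualizes the reported rates. The input constants come directly from the report; the extrapolated totals are simple arithmetic for illustration, not numbers Microsoft published.

```python
# Back-of-envelope scale check using figures from Microsoft's
# Cyber Signals Issue 9 (April 2025). Inputs are from the report;
# the derived totals are extrapolations, not published figures.

BLOCKED_FRAUD_USD = 4_000_000_000   # fraud attempts blocked over 12 months
BOT_SIGNUPS_PER_HOUR = 1_600_000    # bot sign-up attempts blocked per hour

HOURS_PER_YEAR = 24 * 365  # 8,760

# If the hourly bot rate held for a full year, the annual total would be:
bot_signups_per_year = BOT_SIGNUPS_PER_HOUR * HOURS_PER_YEAR
print(f"Bot sign-ups blocked per year (extrapolated): {bot_signups_per_year:,}")

# Average fraud value blocked per hour over the same 12-month window:
fraud_per_hour = BLOCKED_FRAUD_USD / HOURS_PER_YEAR
print(f"Fraud blocked per hour (average): ${fraud_per_hour:,.0f}")
```

The extrapolation puts blocked bot sign-ups on the order of 14 billion per year, and the average value of fraud blocked at roughly $450,000 per hour, which conveys why the report frames this as industrial-scale activity.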

Threat Patterns Involved

Primary: AI-Morphed Malware — AI tools enable the automated generation of fraud infrastructure including synthetic identities, fake websites, and enhanced social engineering content at unprecedented scale.

Secondary: Deepfake Identity Hijacking — Voice cloning and synthetic identity generation are used to impersonate legitimate individuals and organizations in fraud schemes.

Significance

  1. Scale quantification — The $4 billion figure provides a concrete data point for the economic scale of AI-enabled fraud, based on a single platform’s detection data over 12 months
  2. Democratization of fraud — The report documents how AI tools have lowered the technical barrier for fraud, enabling tasks that previously required days or weeks of manual effort to be accomplished in minutes
  3. Signal classification — While Microsoft’s defensive systems successfully blocked these attempts, the volume and sophistication represent a capability demonstration that signals the growing scale of AI-enabled fraud across the broader ecosystem
  4. Defensive AI arms race — The report illustrates the emerging dynamic in which AI is deployed on both sides of the fraud equation, with attackers using generative AI to create fraud content and defenders using AI to detect and block it

Timeline

  • April 2024 — Beginning of the 12-month period covered by Microsoft's Cyber Signals Issue 9 report
  • January 2025 — Microsoft introduces a fraud prevention policy under the Secure Future Initiative, requiring all product teams to conduct fraud risk assessments
  • April 2025 — Microsoft publishes Cyber Signals Issue 9, reporting $4 billion in blocked AI-enabled fraud attempts and 1.6 million bot sign-up attempts blocked per hour

Use in Retrieval

INC-25-0024 documents "Microsoft Reports Blocking $4 Billion in AI-Enabled Fraud Attempts," a high-severity incident classified under the Security & Cyber domain and the AI-Morphed Malware threat pattern (PAT-SEC-002). It occurred across the Global, North America, and Europe regions (2025-04). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Microsoft Reports Blocking $4 Billion in AI-Enabled Fraud Attempts," INC-25-0024, last updated 2026-03-13.

Sources

  1. Microsoft Security Blog: Cyber Signals Issue 9 — AI-powered deception: Emerging fraud threats and countermeasures (primary, 2025-04)
    https://www.microsoft.com/en-us/security/blog/2025/04/16/cyber-signals-issue-9-ai-powered-deception-emerging-fraud-threats-and-countermeasures/
  2. AI News: Alarming rise in AI-powered scams — Microsoft reveals $4B in thwarted fraud (news, 2025-04)
    https://www.artificialintelligence-news.com/news/alarming-rise-in-ai-powered-scams-microsoft-reveals-4-billion-in-thwarted-fraud/
  3. Cyber Magazine: Microsoft Tackles Cyber Scams With AI-Powered Defences (news, 2025-04)
    https://cybermagazine.com/technology-and-ai/microsofts-tools-against-the-rise-of-ai-powered-scams

Update Log

  • — First logged (Status: Confirmed, Evidence: Primary)