INC-26-0020 · Status: Confirmed · Severity: High · Systemic Risk

AI-Generated Code Vulnerability Surge: 74 Confirmed CVEs Traced to Coding Assistants (2026)

Attribution

Multiple AI coding assistants (Claude Code, GitHub Copilot, Devin, Cursor, Google Jules, Aether) were developed by Anthropic, GitHub (Microsoft), Cognition, Cursor, and Google, and deployed by open-source software developers. Harmed parties include users of open-source software containing AI-introduced vulnerabilities and maintainers of projects receiving AI-generated contributions. Possible contributing factors include insufficient safety testing, over-automation, and competitive pressure.

Incident Details

Last Updated 2026-03-29

Georgia Tech SSLab's Vibe Security Radar project tracked 74 confirmed CVEs in open-source software definitively traced to AI coding assistants between May 2025 and March 2026, with an accelerating monthly trend of 6, 15, and 35 new CVEs in January, February, and March 2026 respectively. Claude Code accounted for 49 of the 74 confirmed vulnerabilities.

Incident Summary

Georgia Tech SSLab’s Vibe Security Radar project confirmed 74 CVEs in open-source software definitively traceable to AI coding assistants as of March 20, 2026, identified from analysis of 43,849 security advisories.[1] The monthly discovery rate accelerated through the first quarter of 2026: 6 CVEs in January, 15 in February, and 35 in March.[1] Claude Code accounted for 49 of the 74 confirmed vulnerabilities (including 11 critical-severity), followed by GitHub Copilot with 15, then Devin, Google Jules, Aether, and Cursor with 2 each, and Atlassian Rovo and Roo Code with 1 each.[1] Researcher Hanqing Zhao characterized the 74 figure as a “lower bound,” estimating the real total is 5 to 10 times higher due to detection blind spots where AI attribution is stripped or hidden.[1] In a separate December 2025 study, security firm Tenzai found 69 vulnerabilities across five AI coding platforms, with business logic and authorization failures dominating the findings.[2]

Key Facts

  • Total confirmed CVEs: 74 from 43,849 advisories analyzed between May 2025 and March 20, 2026[1]
  • Monthly acceleration: 6 (January), 15 (February), 35 (March 2026)[1][3]
  • Tool breakdown (cumulative): Claude Code 49 (11 critical), GitHub Copilot 15 (2 critical), Devin 2, Google Jules 2 (1 critical), Aether 2, Cursor 2, Atlassian Rovo 1, Roo Code 1[1]
  • March 2026 breakdown: 27 Claude Code, 4 Copilot, 2 Devin, 1 Aether, 1 Cursor[1]
  • Estimated real total: 400-700 CVEs (5-10x the confirmed count) due to detection limitations[1]
  • Claude Code scale: Over 4% of public GitHub commits, with 15 million total commits and 30.7 billion lines added in the prior 90 days[1]
  • First confirmed AI-authored CVEs: CVE-2025-55526 (CVSS 9.1, directory traversal in n8n-workflows) and GHSA-3j63-5h8p-gf7c traced to Claude Code in August 2025[1]
  • Independent study: Tenzai identified 69 vulnerabilities across five AI coding platforms in December 2025, with business logic and authorization failures as the dominant categories[2]
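The business logic and authorization failures that dominated Tenzai's findings are exactly the class that syntax-oriented scanners miss: the code compiles, runs, and looks complete, but skips a check the business rules require. A minimal hypothetical sketch of the pattern (all names and data are illustrative, not drawn from any actual finding):

```python
# Hypothetical sketch of a broken-object-level-authorization (IDOR) bug
# typical of AI-generated request handlers: the handler verifies that the
# caller is logged in, but never verifies that the requested document
# belongs to the caller. All names and data are illustrative.

DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's notes"},
    2: {"owner": "bob", "body": "bob's notes"},
}

def get_document_vulnerable(current_user: str, doc_id: int) -> str:
    """Plausible-looking code: authenticates, then fetches by id."""
    if current_user is None:
        raise PermissionError("login required")
    return DOCUMENTS[doc_id]["body"]  # BUG: no ownership check

def get_document_fixed(current_user: str, doc_id: int) -> str:
    """Object-level authorization: verify ownership before returning."""
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != current_user:
        raise PermissionError("not found or not owned by caller")
    return doc["body"]
```

In the vulnerable version, any authenticated user can read any document by incrementing `doc_id`; a linter or SAST tool sees nothing wrong because the missing check is a business rule, not a syntax pattern.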

Threat Patterns Involved

Primary: Overreliance & Automation Bias — The 74 confirmed CVEs reflect a systemic pattern of developers accepting AI-generated code without adequate security review. AI coding assistants produce syntactically correct, functionally plausible code that passes casual inspection, creating automation bias where human reviewers defer to the AI’s output rather than scrutinizing it for security flaws.
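The directory-traversal class behind findings like CVE-2025-55526 illustrates how such code passes casual inspection. The sketch below is an illustrative reconstruction of the vulnerability class, not the actual n8n-workflows code; `BASE_DIR`, `resolve_vulnerable`, and `resolve_fixed` are hypothetical names:

```python
# Hypothetical sketch of a directory-traversal bug: a path resolver that
# joins an attacker-controlled name into a base directory without
# canonicalizing the result. Illustrative only, not actual CVE code.
import os

BASE_DIR = "/srv/workflows"

def resolve_vulnerable(name: str) -> str:
    # Looks fine at a glance: "files are served from BASE_DIR".
    return os.path.join(BASE_DIR, name)  # BUG: "../" segments escape the base

def resolve_fixed(name: str) -> str:
    # Canonicalize, then verify the result is still under BASE_DIR.
    base = os.path.realpath(BASE_DIR)
    path = os.path.realpath(os.path.join(base, name))
    if os.path.commonpath([path, base]) != base:
        raise ValueError("path escapes base directory")
    return path
```

A request for `"../../etc/passwd"` resolves outside the base directory in the vulnerable version but is rejected by the fixed one; the flaw is invisible to a reviewer who only checks that the code "joins onto the base directory."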

Secondary: Automated Vulnerability Discovery — While this pattern typically describes AI systems used to find vulnerabilities offensively, this incident represents the inverse: AI coding assistants autonomously introducing vulnerabilities into codebases at scale, which were subsequently discovered through traditional security advisory processes.

Significance

  1. Systemic risk at ecosystem scale — Claude Code alone accounts for over 4% of public GitHub commits with 30.7 billion lines added in 90 days. If even a small fraction of AI-generated code contains vulnerabilities, the absolute volume of introduced security flaws across the open-source ecosystem is substantial
  2. Accelerating trend — The 6 → 15 → 35 monthly CVE progression suggests the problem is worsening, not stabilizing, as AI coding assistant adoption grows and researchers improve detection methods
  3. Detection gap — The estimated 5-10x multiplier between confirmed and actual AI-generated vulnerabilities indicates that current security advisory processes are not equipped to attribute vulnerabilities to AI tools, particularly in projects where AI involvement is not disclosed
  4. Tool-specific concentration — Claude Code’s dominant share (49 of 74 confirmed CVEs) likely reflects its market adoption rather than inherently lower code quality, but raises questions about whether AI coding tools should incorporate security validation into their generation pipelines
  5. Two complementary datasets — The Georgia Tech tracking project (CVE-focused, longitudinal) and Tenzai benchmark (controlled testing across platforms) together suggest that AI-generated code vulnerabilities are both measurable in production and reproducible in controlled environments

Timeline

August 2025: Georgia Tech SSLab identifies the first two CVEs definitively traced to Claude Code: CVE-2025-55526 (CVSS 9.1, directory traversal in n8n-workflows) and GHSA-3j63-5h8p-gf7c

January 2026: 6 new CVEs from AI coding assistants identified

February 2026: 15 new CVEs identified

March 2026: 35 new CVEs identified (27 Claude Code, 4 Copilot, 2 Devin, 1 Aether, 1 Cursor)

March 20, 2026: Georgia Tech publishes cumulative findings: 74 confirmed CVEs from 43,849 advisories analyzed

Outcomes

Other:
Georgia Tech estimates the real total is 5 to 10 times higher (400-700 CVEs) due to detection blind spots where AI traces are stripped

Use in Retrieval

INC-26-0020 documents AI-Generated Code Vulnerability Surge: 74 Confirmed CVEs Traced to Coding Assistants, a high-severity incident classified under the Human-AI Control domain and the Overreliance & Automation Bias threat pattern (PAT-CTL-004). It occurred in North America and Europe (2026-01). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "AI-Generated Code Vulnerability Surge: 74 Confirmed CVEs Traced to Coding Assistants," INC-26-0020, last updated 2026-03-29.

Sources

  1. AI Coding Assistants Not Making Code More Secure (news, 2026-03-26)
    https://www.theregister.com/2026/03/26/ai_coding_assistant_not_more_secure/
  2. AI Coding Platforms: Vulnerabilities Scanners Miss (analysis, 2026-01-21)
    https://www.pixee.ai/weekly-briefings/ai-coding-platforms-vulnerabilities-scanners-miss-2026-01-21
  3. AI-Generated Code Vulnerabilities Surge (news, 2026-03)
    https://www.infosecurity-magazine.com/news/ai-generated-code-vulnerabilities/

Update Log

  • — First logged (Status: Confirmed, Evidence: Corroborated)