INC-25-0007: GitHub Copilot Remote Code Execution via Prompt Injection (CVE-2025-53773) (2025) [Confirmed | Critical | Near Miss]
GitHub (Microsoft) developed and deployed GitHub Copilot (an AI code-completion VS Code extension). Those at risk were software developers using GitHub Copilot and organizations whose developers use the VS Code extension; contributing factors included a prompt injection vulnerability and inadequate access controls.
Incident Details
| Date Occurred | 2025-08 | Severity | critical |
| Evidence Level | primary | Impact Level | Sector |
| Failure Stage | Near Miss | ||
| Domain | Security & Cyber | ||
| Primary Pattern | PAT-SEC-001 Adversarial Evasion | ||
| Secondary Patterns | PAT-SEC-006 Prompt Injection Attack, PAT-AGT-006 Tool Misuse & Privilege Escalation | ||
| Regions | global | ||
| Sectors | Corporate, Cross-Sector | ||
| Affected Groups | Developers & AI Builders | ||
| Exposure Pathways | Adversarial Targeting | ||
| Causal Factors | Prompt Injection Vulnerability, Inadequate Access Controls | ||
| Assets & Technologies | Large Language Models, Autonomous Agents | ||
| Entities | GitHub (Microsoft)(developer, deployer) | ||
| Harm Type | operational | ||
A critical remote code execution vulnerability (CVE-2025-53773) was discovered in GitHub Copilot's VS Code extension, enabling attackers to execute arbitrary code on developer machines through prompt injection in code context.
Incident Summary
In August 2025, security researchers disclosed CVE-2025-53773, a prompt injection vulnerability in GitHub Copilot’s Agent Mode that enabled remote code execution (RCE), scored 7.8 (High) on CVSS v3.1.[1] The vulnerability exploited Copilot’s ability to create and write workspace files without user approval. Attackers could embed prompt injection payloads in public repository code comments, GitHub issues, or web content, which would instruct Copilot to modify the .vscode/settings.json configuration file to enable "chat.tools.autoApprove": true — effectively placing the AI assistant into unsupervised execution mode (“YOLO mode”).[1] Once auto-approve was enabled, Copilot could execute privileged shell commands without user intervention.[2] The vulnerability was reported through responsible disclosure on June 29, 2025, and patched by Microsoft in the August 2025 Patch Tuesday release.[3]
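The configuration change at the center of the attack chain can be illustrated with a minimal workspace settings file. The key name comes from the disclosure; a real injected file would typically also carry whatever legitimate settings the workspace already had:

```jsonc
// .vscode/settings.json after a successful injection: this single key
// switches the assistant into unattended tool execution ("YOLO mode").
{
  "chat.tools.autoApprove": true
}
```

Because Copilot in Agent Mode could write workspace files without approval, the assistant itself could be instructed to make this edit, disabling the very confirmation step that would otherwise have stopped it.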
Key Facts
- CVE-2025-53773 was assigned a CVSS v3.1 base score of 7.8 (High).[3]
- GitHub Copilot in Agent Mode could create and write files in the workspace without user approval.[1]
- Malicious prompt injections were delivered through source code comments, project files, GitHub issues, or web content.[1]
- The attack chain instructed Copilot to add `"chat.tools.autoApprove": true` to `.vscode/settings.json`.[1]
- Once auto-approve was enabled, Copilot could execute privileged shell commands without user confirmation.[2]
- Researchers demonstrated the vulnerability could enable creation of self-propagating “AI viruses” (dubbed “ZombAI” networks) that spread through infected repositories.[1]
- The vulnerability was reported via responsible disclosure on June 29, 2025.[1]
- Microsoft patched the vulnerability in the August 2025 Patch Tuesday release.[3]
- The vulnerability affected millions of developers using GitHub Copilot.[4]
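Since the attack leaves a concrete artifact in the repository, it can be audited for. The following is a hypothetical detection sketch (not an official tool): it walks a checkout and flags any `.vscode/settings.json` that enables the auto-approve key abused in CVE-2025-53773.

```python
import json
import sys
from pathlib import Path

# Settings keys whose presence with a true value should be treated as
# suspicious in a shared repository (assumption: this list is illustrative).
RISKY_KEYS = {"chat.tools.autoApprove"}

def find_risky_settings(repo_root: str) -> list[str]:
    """Return paths of .vscode/settings.json files enabling auto-approve."""
    flagged = []
    for settings in Path(repo_root).rglob(".vscode/settings.json"):
        try:
            config = json.loads(settings.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            # VS Code settings may be JSONC (comments, trailing commas);
            # a production scanner would need a JSONC-tolerant parser.
            continue
        for key in RISKY_KEYS:
            if config.get(key) is True:
                flagged.append(str(settings))
    return flagged

if __name__ == "__main__":
    for path in find_risky_settings(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"WARNING: auto-approve enabled in {path}")
```

A check like this could run as a pre-commit hook or CI step, so that an injected settings change is caught before the repository is shared with other developers.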
Threat Patterns Involved
Primary: Adversarial Evasion — The attack exploited prompt injection to manipulate an AI coding assistant into modifying its own security configuration, bypassing the user approval mechanism that was intended to prevent unauthorized actions.[1] The injection payloads were delivered through normal development artifacts (code comments, issues), making them difficult to detect through traditional security controls.[2]
Secondary: Tool Misuse & Privilege Escalation — Once auto-approve was enabled, the AI agent could execute arbitrary shell commands, demonstrating how agentic AI systems with tool-use capabilities can be weaponized when their safety guardrails are compromised through adversarial manipulation.[1]
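One mitigation implied by this pattern is to treat the assistant's own approval settings as privileged state the agent cannot write. The sketch below is not Copilot's actual mechanism; the `SettingsGuard` class and actor labels are hypothetical, illustrating a deny-by-default rule for agent-initiated edits to security-relevant configuration.

```python
class SettingsGuard:
    """Reject agent-initiated writes to security-sensitive settings."""

    # Keys that only a human may change (illustrative list).
    PROTECTED = frozenset({"chat.tools.autoApprove"})

    def apply(self, key: str, value, actor: str):
        """Apply a settings change; raise if an agent touches a protected key."""
        if key in self.PROTECTED and actor != "human":
            raise PermissionError(f"agent may not modify {key}")
        return (key, value)
```

Under such a rule, the CVE-2025-53773 attack chain breaks at its pivotal step: the injected instructions can still ask the agent to flip auto-approve, but the write is refused rather than silently applied.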
Significance
CVE-2025-53773 demonstrates a critical escalation in prompt injection risk: the ability to achieve remote code execution through an AI coding assistant used by millions of developers.[4] The attack’s self-propagating potential — where compromised repositories could infect other developers’ workstations — represents a novel supply chain attack vector enabled by AI tools.[1] The vulnerability illustrates how agentic AI systems with file system and shell access create fundamentally new attack surfaces that extend traditional prompt injection from information disclosure to full system compromise.[2]
Use in Retrieval
INC-25-0007 documents github copilot remote code execution via prompt injection (cve-2025-53773), a critical-severity incident classified under the Security & Cyber domain and the Adversarial Evasion threat pattern (PAT-SEC-001). It occurred in global (2025-08). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "GitHub Copilot Remote Code Execution via Prompt Injection (CVE-2025-53773)," INC-25-0007, last updated 2026-02-21.
Sources
- Embrace The Red: GitHub Copilot Remote Code Execution via Prompt Injection (primary, 2025-08)
  https://embracethered.com/blog/posts/2025/github-copilot-remote-code-execution-via-prompt-injection/
- GBHackers: GitHub Copilot RCE Vulnerability via Prompt Injection (news, 2025-08)
  https://gbhackers.com/github-copilot-rce-vulnerability/
- NVD: CVE-2025-53773 (primary, 2025-08)
  https://nvd.nist.gov/vuln/detail/CVE-2025-53773
- CybersecurityNews: GitHub Copilot RCE Vulnerability (news, 2025-08)
  https://cybersecuritynews.com/github-copilot-rce-vulnerability/
Update Log
- — First logged (Status: Confirmed, Evidence: Primary)