Remote Code Execution
A class of security vulnerability that allows an attacker to run arbitrary code on a target system from a remote location. In AI contexts, remote code execution risks arise when language models with code execution capabilities are manipulated through prompt injection or tool misuse to execute attacker-controlled commands.
Definition
Remote code execution (RCE) is a vulnerability class where an attacker causes a target system to execute arbitrary code without physical access. RCE vulnerabilities are consistently rated as critical severity in traditional cybersecurity because they grant attackers direct control over the compromised system. In AI systems, RCE risks emerge when language models are granted code execution capabilities — running Python scripts, executing shell commands, modifying files — and an attacker manipulates the model’s inputs to execute malicious code. This represents a convergence of traditional cybersecurity vulnerabilities with novel AI attack vectors.
How It Relates to AI Threats
RCE is a severity signal within the Security and Cyber Threats and Agentic and Autonomous Threats domains. When prompt injection attacks target AI systems with code execution privileges, the result can escalate from information disclosure to full remote code execution. AI coding assistants, agentic frameworks, and autonomous agents that can write and run code are particularly exposed. The attack chain typically follows: prompt injection → instruction override → malicious code generation → code execution → system compromise. This makes RCE the highest-severity outcome of agentic AI exploitation.
Why It Occurs
- AI coding assistants and agentic systems are increasingly granted shell access and code execution capabilities by design
- Prompt injection can redirect code generation from intended tasks to attacker-controlled commands
- Sandboxing and isolation of AI code execution environments is often incomplete or misconfigured
- The principle of least privilege is frequently violated when AI agents are given broad system access for convenience
- Tool-use frameworks may not adequately validate or constrain the parameters passed to code execution tools
Real-World Context
CVE-2025-53773 (GitHub Copilot) demonstrated that prompt injection in an AI coding assistant could lead to arbitrary code execution on a developer’s machine. CVE-2025-54135 and CVE-2025-54136 (Cursor IDE) revealed similar RCE pathways through manipulated tool-calling interfaces. These vulnerabilities highlight how AI systems that bridge the gap between natural language and code execution create novel RCE attack surfaces not present in traditional software.
Related Incidents
Claude Code 'Claudy Day' Vulnerability Chain — Silent Data Exfiltration via Prompt Injection
OpenClaw AI Agent Platform Hit by Critical Vulnerability and Supply Chain Campaign
Claude Code Remote Code Execution and API Key Exfiltration Vulnerabilities
Cursor AI Code Editor Shell Built-In Allowlist Bypass Enables Zero-Click RCE
Perplexity Comet AI Browser Enables Zero-Click Credential Theft via Prompt Injection
Three Chained Prompt Injection Vulnerabilities in Anthropic MCP Git Server
Related Threat Patterns
Related Terms
Last updated: 2026-04-03