Technical Attack

Attack Surface

The totality of entry points, interfaces, and pathways through which an adversary can attempt to interact with, extract data from, or inject inputs into an AI system. In machine learning contexts, the attack surface extends beyond traditional software boundaries to include training pipelines, model APIs, prompt interfaces, tool integrations, and data ingestion channels.

Definition

An attack surface is the complete set of points where an unauthorized user can attempt to enter data into or extract data from an AI system. For traditional software, attack surfaces include network ports, APIs, and user interfaces. AI systems introduce additional attack surface components: training data pipelines, model weight files, prompt interfaces, embedding stores, retrieval-augmented generation (RAG) document corpora, tool-calling endpoints, and agent memory stores. The larger the attack surface, the more opportunities an adversary has to find and exploit a vulnerability.
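The "complete set of points" in the definition above can be made concrete as a simple inventory. The sketch below is illustrative only; the component names are hypothetical examples drawn from the list in this definition, not from any particular system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EntryPoint:
    """One point where data can enter or leave the AI system."""
    name: str
    category: str            # "traditional" or "ai-specific"
    externally_reachable: bool

# Hypothetical inventory for an LLM application with RAG and tool calling.
ATTACK_SURFACE = [
    EntryPoint("rest_api", "traditional", True),
    EntryPoint("prompt_interface", "ai-specific", True),
    EntryPoint("rag_corpus_ingestion", "ai-specific", True),
    EntryPoint("training_pipeline", "ai-specific", False),
    EntryPoint("tool_calling_endpoint", "ai-specific", True),
    EntryPoint("agent_memory_store", "ai-specific", False),
]

def exposed(surface):
    """Entry points an external adversary can reach directly."""
    return [p.name for p in surface if p.externally_reachable]
```

An inventory like this makes the closing claim of the definition measurable: each entry added to the list is another opportunity for an adversary to probe.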

How It Relates to AI Threats

Within the Security and Cyber Threats domain, attack surface analysis is foundational to threat modeling for AI deployments. Every new capability added to an AI system — tool access, persistent memory, multi-agent communication, code execution — expands the attack surface. Prompt injection attacks exploit the natural language interface as an attack surface. Supply chain attacks target the training pipeline and dependency graph. Model inversion attacks exploit prediction APIs. Understanding and minimising the attack surface is the first step in securing any AI system.
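The point that every new capability expands the attack surface can be sketched in code. The mapping below is a hypothetical illustration: each capability named in the paragraph above is paired with invented example entry points it might introduce.

```python
# Illustrative only: each added capability contributes its own new
# entry points (all identifiers here are hypothetical examples).
CAPABILITY_SURFACE = {
    "tool_access":       ["tool_api_endpoint", "tool_arg_parser"],
    "persistent_memory": ["memory_store_read", "memory_store_write"],
    "multi_agent_comms": ["inter_agent_channel"],
    "code_execution":    ["sandbox_boundary", "exec_input"],
}

def surface_growth(enabled_capabilities):
    """Count the entry points added by the enabled capabilities."""
    return sum(len(CAPABILITY_SURFACE[c]) for c in enabled_capabilities)
```

Threat modeling with a table like this makes the trade-off explicit before a capability ships: a bare model has a growth of zero, and enabling all four capabilities above adds seven new entry points to review.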

Why It Occurs

  • AI systems combine multiple software components (model serving, vector databases, API gateways, tool servers), each with its own attack surface
  • Natural language interfaces are inherently harder to validate than structured inputs
  • Agentic AI systems grant models access to tools, files, and APIs, dramatically expanding the attack surface beyond the model itself
  • Rapid deployment cycles often prioritise functionality over security review of new interfaces
  • Third-party integrations (MCP servers, plugins, retrieval sources) introduce attack surface components outside the deployer’s direct control
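The factors above all argue for shrinking the surface at controllable boundaries. A minimal sketch of that idea, assuming tool calls arrive as a name plus keyword arguments (the tool names and schemas below are hypothetical):

```python
# Minimal sketch of attack surface reduction at the tool-calling
# boundary: only allowlisted tools may be dispatched, and argument
# keys must match the tool's declared schema.
ALLOWED_TOOLS = {
    "search_docs": {"query"},
    "read_file":   {"path"},
}

def validate_tool_call(name, args):
    """Reject tools or argument keys outside the declared surface."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowlisted: {name}")
    unexpected = set(args) - ALLOWED_TOOLS[name]
    if unexpected:
        raise ValueError(f"unexpected arguments: {sorted(unexpected)}")
    return True
```

The design choice here is deny-by-default: anything not explicitly declared, including third-party tools outside the deployer's control, is rejected rather than forwarded to the model's execution path.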

Real-World Context

The expansion of AI attack surfaces has been documented in multiple CVE advisories targeting AI-integrated development tools. CVE-2025-53773 (GitHub Copilot) and CVE-2025-54135/54136 (Cursor IDE) demonstrated that tool-use interfaces in coding assistants created exploitable attack surfaces. The OWASP Top 10 for LLM Applications and MITRE ATLAS both emphasise attack surface reduction as a primary defensive strategy for AI systems.

Last updated: 2026-04-03