Agentic AI
AI systems that autonomously plan and execute multi-step actions with minimal human oversight.
Definition
Agentic AI refers to artificial intelligence systems designed to operate with a high degree of autonomy, capable of setting sub-goals, planning sequences of actions, executing tasks across tools and environments, and adapting their behaviour based on feedback — all with minimal or no human intervention during execution. Unlike traditional AI models that respond to single prompts, agentic systems maintain persistent state, interact with external services, and make consequential decisions over extended time horizons. The term encompasses a spectrum of autonomy, from tool-augmented language models to fully autonomous multi-agent systems operating in open-ended environments.
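The loop described above (set sub-goals, execute via tools, adapt from feedback, keep persistent state) can be sketched in a few lines. This is a minimal illustration, not a real framework; all names (`Agent`, `plan`, `execute_tool`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Minimal agentic loop: plan sub-goals, act via a tool, retain state."""
    goal: str
    memory: list = field(default_factory=list)  # persistent state across steps

    def plan(self) -> list[str]:
        # A real system would call a planner (e.g. an LLM); this stub
        # simply decomposes the goal into three fixed sub-goals.
        return [f"step {i} of {self.goal}" for i in range(1, 4)]

    def execute_tool(self, sub_goal: str) -> str:
        # Stand-in for an external tool or API call.
        return f"result of {sub_goal}"

    def run(self) -> list[str]:
        for sub_goal in self.plan():
            observation = self.execute_tool(sub_goal)
            self.memory.append(observation)  # feedback folded into state
        return self.memory

agent = Agent(goal="summarise logs")
results = agent.run()
```

Each pass through `run` is one plan-act-observe cycle; real agents iterate this loop over extended horizons, which is precisely where oversight gaps emerge.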
How It Relates to AI Threats
Agentic AI is a primary concern within the Agentic & Autonomous threat domain. As systems gain the ability to act independently, they introduce risks of goal drift — where an agent’s pursued objectives diverge from the operator’s intent — and tool misuse, where agents exploit available permissions or APIs beyond their intended scope. Agent-to-agent propagation compounds these risks when multiple autonomous systems interact, potentially amplifying errors or adversarial behaviours without human checkpoints. Within Human-AI Control, agentic systems challenge existing oversight frameworks because the speed and complexity of their operations can exceed human capacity for meaningful review.
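One common mitigation for the tool-misuse risk described above is to gate every tool invocation against an operator-approved allowlist. The sketch below assumes a hypothetical `invoke_tool` dispatcher; it is illustrative only.

```python
# Operator-approved tool scope for this agent; anything outside it is refused.
ALLOWED_TOOLS = {"search", "read_file"}

def invoke_tool(tool_name: str, args: dict) -> str:
    """Refuse any tool call outside the agent's approved permission scope."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' outside approved scope")
    # Stand-in for dispatching to the real tool implementation.
    return f"{tool_name} called with {args}"
```

A check like this constrains the blast radius of goal drift: even if the agent's pursued objective diverges, its available actions remain bounded by the allowlist.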
Why It Occurs
- Large language models are increasingly deployed with tool-use capabilities, memory, and multi-step planning frameworks
- Competitive pressure incentivises deployment of autonomous agents before adequate safety evaluations are completed
- The operational advantage of speed and scale makes agentic architectures attractive for enterprise and security applications
- Current alignment techniques were developed for single-turn interactions and do not fully generalise to persistent, multi-step agent behaviour
- Evaluation frameworks for agentic systems remain immature relative to the pace of capability development
Real-World Context
The AI-orchestrated cyber espionage campaign documented in INC-25-0001 demonstrated how agentic AI systems can autonomously conduct multi-stage operations — including reconnaissance, vulnerability discovery, and data exfiltration — with minimal human direction. This incident highlighted the gap between the operational capabilities of agentic systems and the existing oversight mechanisms designed to contain them. Regulatory bodies including the EU AI Office and NIST have begun developing specific frameworks for evaluating and governing agentic AI deployments.
Last updated: 2026-02-14