Least Privilege
A security principle requiring that any entity — user, process, or AI agent — be granted only the minimum permissions necessary to perform its intended function and no more. Applied to AI systems, least privilege constrains model access to tools, data, APIs, and system resources to reduce the blast radius of compromise or misuse.
Definition
The principle of least privilege dictates that every component in a system should operate with the smallest set of permissions required to complete its legitimate tasks. Originally formalised in computer security by Jerome Saltzer and Michael Schroeder in 1975, the principle is now a foundational control in cybersecurity frameworks including NIST SP 800-53, ISO 27001, and the CIS Controls. When applied to AI systems, least privilege governs which tools an AI agent can invoke, which files it can read or write, which APIs it can call, which data it can access, and what actions it can take autonomously versus requiring human approval.
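The tool-invocation aspect above can be sketched as a deny-by-default gateway: the agent may call only tools it has been explicitly granted. The tool names and handler here are illustrative assumptions, not any particular agent framework's API.

```python
# Deny-by-default tool gateway: the agent may invoke only tools that
# have been explicitly granted; anything else raises PermissionError.
# Tool names and handlers are illustrative, not a real framework's API.

def search_docs(query: str) -> str:
    """A harmless, read-only tool granted to the agent."""
    return f"results for {query!r}"

# The allowlist deliberately omits write, network, and shell tools.
ALLOWED_TOOLS = {"search_docs": search_docs}

def invoke_tool(name: str, *args, **kwargs):
    """Dispatch a tool call only if the tool is on the allowlist."""
    handler = ALLOWED_TOOLS.get(name)
    if handler is None:
        raise PermissionError(f"tool {name!r} is not granted to this agent")
    return handler(*args, **kwargs)
```

Denying by default, rather than blocking known-bad tools, means a new or unanticipated capability is unusable until someone deliberately grants it.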
How It Relates to AI Threats
Least privilege is a primary mitigation across the Security and Cyber Threats, Agentic and Autonomous Threats, and Human-AI Control domains. When AI agents are granted excessive permissions — broad file system access, unrestricted API keys, administrative shell access — the impact of any successful attack or failure is amplified. A prompt injection attack against an agent operating under least-privilege constraints can expose only a narrow scope of data; the same attack against an over-privileged agent can result in full system compromise, data exfiltration, or remote code execution. Inadequate access controls are among the most frequently cited causal factors in AI incidents.
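The blast-radius limiting described above can be illustrated with path-scoped file access: even if a prompt injection coerces the agent into requesting an arbitrary path, reads stay confined to a workspace root. This is a minimal sketch; the function name and workspace layout are assumptions for illustration.

```python
# Path-scoped file access: reads are confined to a workspace root,
# so a hijacked agent cannot traverse out to sensitive system files.
from pathlib import Path

def read_scoped(workspace: Path, requested: str) -> str:
    """Read a file only if it resolves inside the workspace root."""
    base = workspace.resolve()
    target = (base / requested).resolve()  # collapses ../ traversal attempts
    if not target.is_relative_to(base):    # Path.is_relative_to: Python 3.9+
        raise PermissionError(f"{requested!r} is outside the agent's workspace")
    return target.read_text()
```

Resolving the path before the containment check is the important step: checking the raw string would miss `../`-style traversal.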
Why It Occurs
- Developers often grant AI agents broad permissions during development and fail to restrict them before deployment
- AI agent frameworks may default to permissive access models for ease of integration
- The dynamic nature of agentic tasks makes it difficult to predict the minimum permission set in advance
- Organisational pressure to ship AI features quickly deprioritises security hardening
- Multi-agent systems create complex permission chains where least privilege is difficult to enforce end-to-end
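One hedged mitigation for the prediction problem in the list above is an "audit mode" during development: the gateway records every capability the agent requests but has not been granted, so a minimal permission set can be derived from real task traces before deployment. The class and capability names below are illustrative assumptions.

```python
# Sketch of an audit-mode gateway: denied capability requests are
# counted (never granted), and the resulting log informs the minimal
# allowlist shipped to production. All names are illustrative.
from collections import Counter

class AuditingGateway:
    def __init__(self, granted: set[str]):
        self.granted = set(granted)
        self.denied = Counter()  # capability -> times requested but denied

    def check(self, capability: str) -> bool:
        """Return True if granted; otherwise log the denial and refuse."""
        if capability in self.granted:
            return True
        self.denied[capability] += 1
        return False
```

After a representative set of tasks, capabilities with zero denied requests and zero grants can stay out of the production allowlist entirely.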
Real-World Context
The OWASP Top 10 for LLM Applications identifies Excessive Agency (LLM08) as a top vulnerability, directly linked to violations of least privilege. NIST’s AI Risk Management Framework recommends access control as a core governance measure. Multiple AI security incidents have demonstrated that over-privileged AI agents amplify the impact of prompt injection from information leakage to system-level compromise.
Last updated: 2026-04-03