Systemic Risk
The risk that failure, disruption, or unintended behaviour in one component of the AI ecosystem propagates across interconnected systems and institutions, causing widespread harm that exceeds the sum of individual failures.
Definition
Systemic risk in the context of artificial intelligence refers to the potential for localised failures or disruptions to cascade through interconnected technological, economic, and social systems, producing consequences disproportionate to the initial triggering event. Unlike isolated risks affecting individual systems, systemic risk emerges from dependencies, feedback loops, and structural vulnerabilities that connect AI-enabled infrastructure across sectors. The concept draws from financial systemic risk theory but extends to technological monocultures, shared model dependencies, and the concentration of AI capabilities among a small number of providers. Systemic risk increases as critical infrastructure becomes more dependent on AI systems with common failure modes.
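The cascade dynamic described above can be sketched as failure propagation over a dependency graph. The sketch below is purely illustrative: the system names and dependency map are hypothetical, not drawn from any real deployment.

```python
# Illustrative sketch: one upstream failure propagates to every system that
# depends on it, directly or transitively, so the total impact exceeds the
# initial triggering event. All names below are hypothetical.
from collections import deque

# Hypothetical dependency map: each system lists the systems that depend on it.
DEPENDENTS = {
    "foundation_model": ["triage_assistant", "fraud_screening", "doc_pipeline"],
    "triage_assistant": ["hospital_scheduling"],
    "fraud_screening": ["payment_gateway"],
    "doc_pipeline": [],
    "hospital_scheduling": [],
    "payment_gateway": [],
}

def cascade(initial_failure: str) -> set[str]:
    """Return every system knocked out by a single initial failure (BFS)."""
    failed = {initial_failure}
    queue = deque([initial_failure])
    while queue:
        system = queue.popleft()
        for dependent in DEPENDENTS.get(system, []):
            if dependent not in failed:
                failed.add(dependent)
                queue.append(dependent)
    return failed

# One fault in the shared model disables all six systems in this toy graph.
print(sorted(cascade("foundation_model")))
```

In this toy graph a fault in `foundation_model` takes down every downstream system, whereas a fault in `doc_pipeline` affects only itself; interconnectedness, not the severity of the initial fault, determines the impact.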
How It Relates to AI Threats
Systemic risk is a defining characteristic of the Systemic-Catastrophic threat domain. It connects to infrastructure dependency collapse, where widespread reliance on shared AI systems creates single points of failure, and to accumulative risk and trust erosion, where gradual degradation of institutional reliability compounds into systemic instability. Within the taxonomy, systemic risk differs from isolated threat patterns in that its impact is determined not by the severity of any single failure but by the interconnectedness of the systems affected. A vulnerability in a widely deployed foundation model, for example, could simultaneously compromise applications across healthcare, finance, and governance.
Why It Occurs
- Critical infrastructure increasingly depends on a small number of shared AI models and providers
- Interconnected systems create hidden dependencies that amplify localised failures across sectors
- Homogeneous AI architectures introduce correlated failure modes with no independent fallback
- Regulatory frameworks lag behind the pace of AI integration into essential services
- Competitive dynamics discourage redundancy and diversification in favour of cost efficiency
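The correlated-failure point above can be made concrete with a toy Monte Carlo comparison: a monoculture in which every sector runs on one shared model versus sectors served by independent providers. The sector count and failure probability are invented for illustration, not empirical estimates.

```python
# Illustrative sketch with made-up numbers: a shared model gives every sector
# a common failure mode, while independent providers fail independently.
import random

random.seed(0)
SECTORS = 5          # e.g. healthcare, finance, energy, transport, governance
P_FAIL = 0.01        # assumed per-model failure probability per period
TRIALS = 100_000

def all_sectors_down(shared_model: bool) -> bool:
    if shared_model:
        # One model serves every sector: a single fault is a total outage.
        return random.random() < P_FAIL
    # Diversified providers: a total outage needs every model to fail at once.
    return all(random.random() < P_FAIL for _ in range(SECTORS))

for shared in (True, False):
    hits = sum(all_sectors_down(shared) for _ in range(TRIALS))
    label = "shared model" if shared else "diversified "
    print(f"{label}: total-outage rate ~ {hits / TRIALS:.5f}")
```

Under these assumptions the monoculture suffers a total outage at roughly the single-model rate (0.01), while the diversified configuration needs five independent failures to coincide (0.01 to the fifth power), which is why homogeneous architectures with no independent fallback dominate the systemic picture.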
Real-World Context
No incidents in the TopAIThreats database are currently classified under systemic risk, though the conditions for such events are intensifying. The EU AI Act explicitly addresses systemic risk for general-purpose AI models, requiring providers of high-impact systems to conduct systemic risk assessments. The 2024 CrowdStrike outage, while not AI-specific, demonstrated how the failure of a single software provider can cascade across airlines, hospitals, and financial institutions globally. AI governance bodies increasingly warn that concentration among foundation model providers creates analogous systemic vulnerabilities.
Last updated: 2026-02-14