Superintelligence
A hypothetical AI system that surpasses human cognitive ability across virtually all domains, including reasoning, planning, and social intelligence.
Definition
Superintelligence refers to a hypothetical form of artificial intelligence that substantially surpasses human cognitive ability across virtually all domains, including scientific reasoning, strategic planning, social intelligence, and creative problem-solving. The concept encompasses multiple possible forms: a single system of vastly superior intelligence, a collective of coordinated AI systems, or a human-machine hybrid that exceeds unaugmented human capability. Superintelligence remains a theoretical construct, but its implications for existential risk and global governance have made it a central subject of AI safety research and policy discourse.
How It Relates to AI Threats
Superintelligence is the defining concern within Systemic & Catastrophic threats. It relates directly to uncontrolled recursive self-improvement, where a system capable of enhancing its own capabilities could rapidly reach superintelligent levels. It also underpins strategic misalignment scenarios, where a superintelligent system pursues objectives that diverge from human values or intentions in ways that are difficult or impossible to correct. The fundamental challenge is that a system significantly more capable than humans may be inherently resistant to human oversight, control, or correction.
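The takeoff reasoning behind this concern can be sketched with a toy model. The sketch below is illustrative only: it assumes, purely for the example, that capability is a single number growing as c[t+1] = c[t] + rate * c[t] ** feedback, and the function name simulate and every parameter value are invented here rather than drawn from any real system.

```python
# Toy model of capability feedback in recursive self-improvement.
# Nothing here models a real AI system; it only shows why a feedback
# exponent above 1 changes growth qualitatively.

def simulate(feedback: float, rate: float = 0.05, steps: int = 200,
             cap: float = 1e9) -> tuple[float, int | None]:
    """Iterate c[t+1] = c[t] + rate * c[t] ** feedback from c = 1.

    Returns the final capability and the step (if any) at which it
    crossed `cap`, treated here as a stand-in for "takeoff".
    """
    c = 1.0
    for t in range(1, steps + 1):
        c += rate * c ** feedback
        if c >= cap:
            return c, t
    return c, None

if __name__ == "__main__":
    for fb in (0.8, 1.0, 1.2):  # diminishing, proportional, compounding returns
        final, crossed = simulate(fb)
        label = (f"crossed 1e9 at step {crossed}" if crossed
                 else f"ended at {final:,.1f}")
        print(f"feedback={fb}: {label}")
```

With a feedback exponent below 1, returns diminish and growth stays modest; at exactly 1, growth is exponential; above 1, the same recurrence crosses any fixed threshold in finite time. That qualitative difference is what the rapid-takeoff argument turns on.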
Why It Occurs
- Recursive self-improvement could enable an AI system to rapidly surpass human-level intelligence
- Advances in compute, data availability, and algorithmic efficiency may cumulatively push systems toward critical capability thresholds
- Competitive dynamics between AI developers and between nations may accelerate development timelines
- Current alignment methods have not been demonstrated to be robust at scales approaching or exceeding human intelligence
- The difficulty of specifying complex human values in formal terms creates persistent misalignment risks
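The final point can be made concrete with a toy optimisation. This is a hedged sketch rather than anything from the literature: the scenario and the functions true_value and proxy_value are invented for the example. It shows how hard selection on an easy-to-measure proxy yields a candidate that scores poorly on the objective the designers actually intended.

```python
# Toy illustration of value misspecification (a Goodhart-style effect).
# Invented scenario: designers want balanced behaviour across two
# features, but only one feature was easy to formalise, so the
# optimiser sees just the proxy.

import random

def true_value(x: float, y: float) -> float:
    """What the designers actually want: both features must be high."""
    return min(x, y)

def proxy_value(x: float, y: float) -> float:
    """What was formally specified: only x was easy to measure."""
    return x

random.seed(0)  # fixed seed so the sketch is reproducible
candidates = [(random.uniform(0, 10), random.uniform(0, 10))
              for _ in range(10_000)]

# Selection pressure on the proxy ignores y entirely, so the proxy
# optimum's score on the true objective is left to chance.
best_by_proxy = max(candidates, key=lambda c: proxy_value(*c))
best_by_true = max(candidates, key=lambda c: true_value(*c))

print(f"proxy optimum: true value = {true_value(*best_by_proxy):.2f}")
print(f"true optimum:  true value = {true_value(*best_by_true):.2f}")
```

The gap between the two printed values is the misalignment: optimisation amplifies whatever the specification omits, and the gap widens as the search becomes more powerful.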
Real-World Context
No superintelligent AI system exists as of early 2026. The concept was formally articulated by Nick Bostrom in his 2014 book Superintelligence: Paths, Dangers, Strategies, which catalysed institutional attention to existential risk from advanced AI. Major AI laboratories including OpenAI, Anthropic, and Google DeepMind have cited superintelligence risk as a motivation for their safety research programmes. The 2023 Bletchley Declaration, signed by 28 countries and the European Union, acknowledged the potential for frontier AI systems to pose catastrophic risks, reflecting growing governmental engagement with the policy implications of advanced AI development.
Related Threat Patterns
- Uncontrolled recursive self-improvement
- Strategic misalignment
Last updated: 2026-02-14