Top AI Threats
AI Capability

Artificial General Intelligence (AGI)

A hypothetical AI system capable of performing any intellectual task that a human can, with the ability to transfer learning across domains without task-specific programming.

Definition

Artificial general intelligence (AGI) refers to a hypothetical AI system that matches or exceeds human cognitive ability across the full range of intellectual tasks, including reasoning, learning, planning, natural language understanding, and adaptation to novel situations. Unlike narrow AI systems, which are designed for specific tasks, AGI would possess the capacity to transfer knowledge between domains and solve problems it was not explicitly trained on. There is no scientific consensus on whether AGI is achievable, when it might be developed, or what precise capabilities would constitute general intelligence.

How It Relates to AI Threats

AGI is a central concept within the Systemic & Catastrophic threat domain and intersects with Human-AI Control concerns. Two primary threat vectors are associated with AGI: uncontrolled recursive self-improvement, in which an AGI-level system iteratively enhances its own capabilities beyond human oversight, and strategic misalignment, in which a highly capable general system pursues objectives that diverge from human intentions. AGI also raises concerns about loss of human agency: systems that outperform humans across multiple domains could shift decision-making authority away from human actors in ways that are difficult to reverse.

Why It Occurs

  • Continued scaling of large language models and multi-modal systems may approach general-purpose reasoning capabilities
  • Competitive dynamics between AI laboratories and between nations incentivise rapid capability advancement
  • There is no established scientific method for reliably measuring progress toward or proximity to general intelligence
  • Current alignment techniques have not been validated at capability levels approaching human-level general intelligence
  • The boundary between narrow AI and general AI is not clearly defined, creating risks of underestimating system capabilities

Real-World Context

No AGI system exists as of early 2026. However, the concept has become a central focus of AI industry strategy and governance discourse. Several leading AI laboratories, including OpenAI and DeepMind, have stated that building AGI is an explicit organisational objective. The term appears frequently in corporate strategy documents, investment prospectuses, and policy discussions, though definitions vary substantially across contexts. The 2023 Bletchley Declaration and subsequent international AI safety summits have addressed risks from frontier AI systems that could approach general capabilities. The EU AI Act includes provisions for general-purpose AI models that reflect concern about systems with broad, cross-domain capabilities.

Last updated: 2026-02-21