Biological Threat
The risk of AI systems being used to design, enhance, or disseminate biological agents capable of causing widespread harm to human health or ecosystems.
Definition
Biological threat in the AI context refers to the risk that artificial intelligence systems could be used to design, enhance, acquire, or deploy biological agents — including pathogens, toxins, or engineered organisms — capable of causing significant harm to human health, agriculture, or ecosystems. AI capabilities relevant to biological threats include protein structure prediction, gene sequence design, automated synthesis pathway planning, and the generation of detailed procedural knowledge. While these capabilities have enormous beneficial applications in medicine and biotechnology, they also lower the barriers to creating dangerous biological agents by reducing the specialised knowledge and laboratory expertise previously required. The dual-use nature of these AI capabilities makes biological threat one of the most consequential risks in AI safety discourse.
How It Relates to AI Threats
Biological threats represent a severe concern within the Systemic and Catastrophic Threats domain. In the AI-assisted biological threat design sub-category, the convergence of AI capabilities with biotechnology creates pathways by which non-state actors or rogue states could develop biological agents whose creation would previously have required state-level resources and expertise. Large language models can provide detailed information about pathogen characteristics, synthesis routes, and delivery mechanisms. AI protein design tools could theoretically be used to engineer novel agents or to enhance the transmissibility, lethality, or immune-evasion properties of existing pathogens. The catastrophic potential of biological threats places them among the highest-priority AI safety concerns.
Why It Occurs
- AI protein prediction and gene design tools dramatically lower the expertise required for pathogen engineering
- Large language models encode dual-use biological knowledge that can be elicited through careful prompting
- Declining costs of DNA synthesis make it increasingly feasible to produce AI-designed biological sequences
- Existing biosecurity screening systems for DNA synthesis orders have significant gaps and inconsistencies
- International biological weapons governance frameworks predate AI capabilities and lack adequate enforcement mechanisms
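The screening-gap point above can be made concrete with a toy sketch of exact k-mer matching, a naive simplification of what DNA synthesis screening pipelines do (real systems use curated databases and similarity search, not exact matching alone). The hazard sequence and order strings below are arbitrary invented placeholders, not real sequences of concern; the example only illustrates why exact-match screening leaves gaps:

```python
# Toy illustration of exact k-mer screening for DNA synthesis orders.
# The "hazard" sequence below is an invented placeholder, not a real
# sequence of concern.

K = 12  # window size; production screeners use longer windows

HAZARD_DB = {"ATGCGTACGTTAGCCGTA"}  # hypothetical sequence of concern


def kmers(seq: str, k: int = K) -> set[str]:
    """All length-k substrings of seq (empty set if seq is shorter than k)."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}


# Pre-index every k-mer of every listed hazard sequence.
HAZARD_KMERS = set().union(*(kmers(s) for s in HAZARD_DB))


def flag_order(order: str) -> bool:
    """Return True if any k-mer of the order matches the hazard index."""
    return not HAZARD_KMERS.isdisjoint(kmers(order))


# An order containing an exact fragment of a listed sequence is caught:
assert flag_order("CCATGCGTACGTTAGCCG")

# But a single-base substitution in the middle breaks every exact 12-mer,
# so the variant slips through -- one reason exact-match screening has gaps:
assert not flag_order("ATGCGTACATTAGCCGTA")
```

A single substitution defeats the check because every window of length K that spans the mutated base no longer matches; closing this gap requires fuzzy or homology-based matching, which is harder to standardise across providers.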
Real-World Context
Evaluations by AI safety organisations have demonstrated that current AI systems can provide actionable information relevant to biological weapon development, though with significant limitations. The 2023 U.S. Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence (EO 14110) directed federal agencies to assess the biosecurity risks of frontier AI models. AI developers including Anthropic, OpenAI, and Google DeepMind have implemented safety evaluations specifically targeting biological threat capabilities. The Nuclear Threat Initiative and the Johns Hopkins Center for Health Security have published assessments of how AI could affect the biological threat landscape, recommending strengthened DNA synthesis screening and international coordination on dual-use AI governance.
Last updated: 2026-02-14