Dual-Use
A characteristic of technologies, tools, or knowledge developed for beneficial purposes that can also be repurposed or exploited for harmful applications. The concept has particular relevance to AI capabilities in cybersecurity, biology, and information manipulation.
Definition
Dual-use refers to the inherent property of certain technologies, research outputs, or capabilities that serve legitimate beneficial functions while simultaneously possessing the potential to be applied for harmful purposes. In artificial intelligence, dual-use concerns are pervasive because the same capabilities that enable medical diagnosis, scientific discovery, and cybersecurity defence can also facilitate disinformation generation, vulnerability exploitation, and weapons development. The concept originates from nuclear and biological weapons governance but has gained renewed urgency as AI systems demonstrate broad applicability across both constructive and destructive applications. Managing dual-use risk requires balancing open scientific progress against the potential for misuse.
How It Relates to AI Threats
Dual-use is relevant across multiple threat domains, most prominently Systemic-Catastrophic and Security-Cyber. In the taxonomy, it connects to ai-assisted biological threat design, where protein folding tools and language models developed for scientific research could assist in pathogen engineering, and to automated vulnerability discovery, where defensive security tools can be repurposed for offensive cyberattacks. The dual-use nature of AI complicates governance because restricting access to capabilities inhibits beneficial applications, while unrestricted access enables harmful ones. This tension is a defining challenge of AI safety policy.
Why It Occurs
- AI capabilities are general-purpose by design, making beneficial and harmful applications technically indistinguishable
- Open publication of research and model weights serves legitimate researchers and malicious actors alike
- Defensive cybersecurity tools and techniques are functionally identical to offensive ones
- Market incentives favour maximising capability without proportional investment in misuse prevention
- International coordination on restricting dual-use AI remains fragmented and unenforceable
Real-World Context
Incident INC-23-0006 illustrates dual-use dynamics in the cybersecurity domain, where AI capabilities intended for defence were leveraged for offensive purposes. The Wassenaar Arrangement, originally designed for conventional arms export controls, has been extended to cover certain dual-use software, though its applicability to AI remains debated. The U.S. Executive Order on AI Safety and the EU AI Act both address dual-use risks through evaluation requirements and export controls. AI laboratories have adopted responsible disclosure practices and staged release strategies to mitigate dual-use harms while preserving research openness.
Last updated: 2026-02-14