Regulatory Concept

International Humanitarian Law

The body of international law governing armed conflict, including rules on distinction, proportionality, and precaution, whose application to AI-enabled weapons systems raises fundamental questions of compliance and accountability.

Definition

International humanitarian law (IHL), also known as the laws of armed conflict or the law of war, is the body of treaty and customary international law that regulates the conduct of armed hostilities. Its core principles include distinction (between combatants and civilians), proportionality (prohibiting attacks whose civilian harm would be excessive relative to the anticipated military advantage), precaution (requiring feasible measures to minimize civilian harm), and humanity (prohibiting unnecessary suffering). IHL applies to all means and methods of warfare, including novel technologies; Article 36 of Additional Protocol I to the Geneva Conventions obliges states to review new weapons, means, and methods of warfare for compatibility with these rules. The emergence of AI-enabled weapons systems — capable of selecting and engaging targets with reduced or absent human control — creates unprecedented challenges for the interpretation and enforcement of these principles.

How It Relates to AI Threats

International humanitarian law is directly relevant to the Systemic and Catastrophic Threats domain. Under the lethal autonomous weapon systems sub-category, IHL provides the primary legal framework through which the international community evaluates whether autonomous weapons can comply with existing legal obligations. Key questions include whether an AI system can make the contextual judgments required by the principle of distinction, whether algorithmic targeting processes can satisfy proportionality assessments that require weighing incommensurable values, and whether meaningful human control is a prerequisite for legal compliance. The accountability gap — determining who bears legal responsibility when an autonomous weapon causes unlawful harm — remains unresolved under current IHL frameworks.

Why It Occurs

  • AI weapons systems make targeting decisions at speeds that preclude real-time human legal review of individual engagements
  • The contextual judgment required by IHL principles of distinction and proportionality may exceed current AI capabilities
  • Accountability structures in IHL assume identifiable human decision-makers, which autonomous systems disrupt
  • States developing autonomous weapons have divergent interpretations of what constitutes meaningful human control
  • International negotiation processes at the UN Convention on Certain Conventional Weapons have not produced binding regulation

Real-World Context

While no specific incidents in the TopAIThreats taxonomy document IHL violations by autonomous weapons, the legal framework is at the centre of active international debate. The International Committee of the Red Cross has called for new legally binding rules on autonomous weapons. Discussions on lethal autonomous weapons under the UN Convention on Certain Conventional Weapons have continued since 2014 — formalized as a Group of Governmental Experts in 2017 — without producing a binding treaty. States including Austria, Costa Rica, and New Zealand have advocated prohibition, while major military powers have generally opposed binding restrictions. The growing deployment of AI-assisted targeting systems in active conflicts underscores the urgency of resolving how IHL applies to autonomous decision-making.

Last updated: 2026-02-14