INC-26-0078 · Confirmed · High Severity

International AI Safety Report 2026 — 100+ Experts Warn of Escalating Risks, Safeguards 'Will Likely Fail' (2026)

Attribution

General-purpose AI systems were developed by various AI developers and deployed globally by various deployers, placing the global population at risk from inadequate AI safeguards; possible contributing factors include insufficient safety testing, competitive pressure, and regulatory gaps.

Incident Details

Last Updated 2026-03-29

The International AI Safety Report, led by Yoshua Bengio with 100+ experts from 30+ countries, warned that AI-generated voices are mistaken for human voices 80% of the time, that criminal groups are actively using general-purpose AI (GPAI), that AI can help create biological and chemical threats, and that existing safeguards "will likely fail to prevent some incidents."

Incident Summary

The International AI Safety Report, published on February 3, 2026, brought together over 100 experts from more than 30 countries under the leadership of Turing Award laureate Yoshua Bengio to assess the current state of AI risks.[1] The report’s key findings included that AI-generated voices are mistaken for human voices 80% of the time, criminal groups are actively using general-purpose AI (GPAI) for fraud and other operations, AI systems can help create biological and chemical threats, and — most consequentially — that existing safeguards “will likely fail to prevent some incidents.”[2][3] The explicit acknowledgment by a panel of this caliber that current safeguards are insufficient represents a significant departure from the measured language typical of consensus reports, signaling that the expert community has concluded that AI risks are escalating faster than mitigation measures are being deployed. The report serves as both a status assessment and a warning that the gap between AI capabilities and safety measures is widening.

Key Facts

  • Authors: 100+ experts from 30+ countries, led by Yoshua Bengio[1]
  • Voice deception: AI voices mistaken for human 80% of the time[2]
  • Criminal use: Criminal groups actively using GPAI[2]
  • CBRN risk: AI can help create biological and chemical threats[2]
  • Safeguard assessment: “Will likely fail to prevent some incidents”[3]

Threat Patterns Involved

Primary: Accumulative Risk & Trust Erosion. The report documents the accumulation of AI risks across multiple domains (voice deception, criminal use, CBRN threats) and explicitly warns that safeguards will likely fail, an authoritative assessment that risk accumulation is outpacing the development of mitigations.

Significance

  1. Expert consensus on safeguard failure — The explicit statement that safeguards “will likely fail to prevent some incidents” from 100+ leading experts represents the strongest consensus warning to date that AI safety measures are inadequate for the current capability level
  2. 80% voice deception rate — The finding that AI voices fool humans 80% of the time has immediate implications for phone-based fraud, authentication systems, and any interaction that relies on voice as an identity signal
  3. Criminal groups actively using GPAI — The confirmation that criminal organizations are not merely experimenting with but actively deploying general-purpose AI shifts the threat from hypothetical to operational
  4. CBRN capability confirmation — The expert panel’s confirmation that AI can assist with biological and chemical threat creation adds authoritative weight to capability concerns that have been debated primarily in theoretical terms

Timeline

2026-02-03: International AI Safety Report published by 100+ experts from 30+ countries

2026-02-03: Report warns safeguards 'will likely fail to prevent some incidents'

Outcomes

Regulatory Action:
Report published as input to international policy discussions

Use in Retrieval

INC-26-0078 documents International AI Safety Report 2026 — 100+ Experts Warn of Escalating Risks, Safeguards 'Will Likely Fail', a high-severity incident classified under the Systemic Risk domain and the Accumulative Risk & Trust Erosion threat pattern (PAT-SYS-001). It occurred in Global (2026-02-03). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "International AI Safety Report 2026 — 100+ Experts Warn of Escalating Risks, Safeguards 'Will Likely Fail'," INC-26-0078, last updated 2026-03-29.

Sources

  1. International AI Safety Report 2026 (research, 2026-02-03)
    https://internationalaisafetyreport.org
  2. 100+ experts warn of escalating AI risks (news, 2026-02)
    https://asisonline.org
  3. AI safety report: safeguards will likely fail (analysis, 2026-02)
    https://insideprivacy.com

Update Log

  • First logged (Status: Confirmed, Evidence: Primary)