
AI Threats to Law Enforcement & Public Safety

How AI-enabled threats affect law enforcement and public safety — through facial recognition errors, predictive policing bias, surveillance overreach, automated decision failures, and AI-assisted criminal operations.

21 documented incidents · 90% rated high or critical severity · 6 with Privacy & Surveillance as primary domain

AI-enabled threats to law enforcement and public safety include facial recognition wrongful arrests disproportionately affecting people of color, predictive policing algorithms that reinforce over-policing patterns, mass surveillance deployment without proportionality assessments, automated justice system errors in bail and sentencing, and AI-assisted criminal operations including deepfake fraud. These threats affect police agencies, courts, corrections, border control, transportation safety, and emergency services.

Law enforcement and public safety face a distinct threat profile because agencies are simultaneously deploying high-consequence AI systems (facial recognition, predictive policing, risk assessment) and confronting AI-augmented criminal threats. The vast majority of documented incidents (90%) are classified as high or critical severity. Privacy & Surveillance and Discrimination & Social Harm together account for over half of all primary threat domains in this sector.

Use this page to brief leadership, inform policing and public safety risk assessments, and explore documented incidents affecting law enforcement operations.

Who this page is for

  • Police chiefs and law enforcement leadership
  • Public safety directors and emergency management
  • Criminal justice administrators and court officials
  • Civilian oversight bodies and inspectors general
  • Civil liberties advocates and policy researchers

At a glance

  • Severity profile: 90% of documented incidents classified high or critical severity. Privacy & Surveillance and Discrimination & Social Harm together account for over half of primary threat domains.
  • Primary threats: Facial recognition wrongful arrests, predictive policing bias, mass surveillance without proportionality, automated justice system errors, AI-assisted criminal operations
  • Key domains: Privacy & Surveillance, Discrimination & Social Harm, Human-AI Control, Information Integrity
  • Regulatory exposure: EU AI Act (prohibited/high-risk law enforcement categories), Fourth Amendment constraints, biometric data laws (BIPA, GDPR)

How AI Threats Appear in Law Enforcement & Public Safety

Law enforcement AI risks cluster around five recurring threat patterns, each documented through real-world incidents in the TopAIThreats database.

Recurring AI threat patterns in law enforcement and public safety
Threat Pattern | Primary Domain | Key Indicator
Facial recognition wrongful arrests | Discrimination & Social Harm | False positive matches leading to detention of innocent individuals
Predictive policing bias | Discrimination & Social Harm | Enforcement patterns disproportionately affecting specific communities
Surveillance overreach | Privacy & Surveillance | AI surveillance deployed without proportionality assessments or oversight
Automated justice system errors | Human-AI Control | Risk scores or recommendations influencing sentencing, bail, or parole without adequate human review
AI-assisted criminal operations | Security & Cyber | Criminals using AI for deepfake fraud, social engineering, or attack planning
  • Facial recognition wrongful arrests — AI-powered facial recognition producing false positive matches that lead to wrongful arrests and detention, disproportionately affecting people of color. The Robert Williams wrongful arrest was among the first publicly documented cases; subsequent incidents, including an NYPD wrongful arrest and a wrongful arrest that led to 108 days of detention, show the pattern persists.
  • Predictive policing bias — AI systems used for crime prediction, risk assessment, and resource allocation that produce systematically biased outcomes against specific communities, reinforcing existing patterns of over-policing through feedback loops in training data (a toy simulation of this loop appears after this list). The COMPAS recidivism algorithm remains the defining example of algorithmic bias in criminal justice.
  • Surveillance overreach — Mass surveillance amplification through AI-powered facial recognition, behavioral analysis, and communications monitoring deployed without adequate legal frameworks. Clearview AI pioneered mass scraping of facial images for law enforcement use, and Brazil’s deployment of FRT across a million schoolchildren shows surveillance expanding beyond traditional law enforcement contexts.
  • Automated justice system errors — AI systems making or heavily influencing determinations about bail, sentencing, parole, and threat assessment, with over-automation reducing human judgment in decisions affecting individual liberty.
  • AI-assisted criminal operations — AI tools enabling more sophisticated criminal activity, from deepfake-based fraud and social engineering to AI-assisted attack planning, as seen in the Las Vegas Cybertruck ChatGPT explosives incident.
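
To make the feedback-loop mechanism concrete, below is a minimal, illustrative simulation. District names and all rates are hypothetical, not drawn from the incident database: two districts share an identical true crime rate, but an allocation rule driven by recorded crime preserves the initial patrol skew indefinitely, because recorded crime reflects where patrols already were.

```python
# Minimal, hypothetical simulation of the predictive-policing feedback loop
# described above. Both districts share an IDENTICAL true crime rate; only
# the initial patrol allocation differs. Recorded crime scales with patrol
# presence, so an allocation rule driven by recorded counts preserves the
# initial skew indefinitely: the data can never reveal the rates are equal.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.05        # identical underlying rate in both districts
DETECTION_PER_PATROL = 0.005  # chance per patrol that an incident is recorded
POPULATION = 10_000
TOTAL_PATROLS = 100

patrols = {"district_a": 70, "district_b": 30}  # arbitrary initial skew

for week in range(8):
    recorded = {}
    for district, n in patrols.items():
        incidents = sum(random.random() < TRUE_CRIME_RATE for _ in range(POPULATION))
        # More patrols -> higher probability each incident is recorded.
        p_detect = min(1.0, n * DETECTION_PER_PATROL)
        recorded[district] = sum(random.random() < p_detect for _ in range(incidents))
    # Naive "predictive" rule: next week's patrols proportional to this
    # week's RECORDED crime -- which itself reflects where patrols were.
    total = sum(recorded.values()) or 1
    patrols = {d: round(TOTAL_PATROLS * c / total) for d, c in recorded.items()}
    print(f"week {week}: recorded={recorded}, next patrols={patrols}")
```

Running the sketch shows district_a consistently "producing" more recorded crime and retaining roughly its initial patrol share, even though the underlying rates never differ.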

Public safety implications

AI creates public safety risks beyond traditional policing:

  • Autonomous vehicle incidents — Self-driving systems involved in fatal and injurious crashes, requiring new investigation methodologies and regulatory frameworks
  • AI-generated false crime alerts — Systems like CrimeRadar producing false alerts that misallocate emergency resources and erode public trust
  • AI-facilitated child exploitation — AI tools used to generate or distribute CSAM, requiring law enforcement to develop new detection capabilities as seen in Operation Alice

Relevant AI Threat Domains

  • Bias & discrimination
  • Surveillance & privacy
  • Decision automation & control
  • Information integrity


What to Watch For

These are the most critical warning signs that law enforcement agencies should monitor for AI-related risks, with actionable guidance for each.

  • Facial recognition used without accuracy auditing across demographic groups — What police leadership can do: Mandate bias and fairness auditing for all facial recognition systems before operational deployment. Require human review of all matches before enforcement action. Publish accuracy rates by demographic group. (A minimal audit sketch follows this list.)

  • Predictive systems used in law enforcement without bias auditing — What agency heads can do: Mandate independent audits of all AI systems that affect enforcement, eligibility, or resource allocation decisions. Establish civilian oversight mechanisms with access to algorithmic audit results.

  • Surveillance technology deployed without proportionality assessments — What oversight bodies can do: Require algorithmic impact assessments before any AI surveillance deployment. Mandate sunset clauses and regular review. Ensure legal frameworks keep pace with surveillance capabilities.

  • Public service decisions automated without meaningful appeal processes — What program administrators can do: Ensure all AI-influenced decisions in criminal justice, immigration, and enforcement include human review pathways. Individuals affected by AI-driven determinations must have access to explanation and appeal.
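
As a concrete starting point for the auditing items above, a demographic accuracy audit can be as simple as computing false positive match rates per group and flagging large disparities. Below is a minimal sketch that assumes match results have already been labeled against ground truth; field names and the 0.8 disparity threshold are illustrative (the threshold loosely follows the four-fifths rule used in disparate-impact analysis).

```python
# Minimal sketch of a per-demographic-group audit of facial recognition
# match results. Assumes each record is already labeled against ground
# truth. Field names and the disparity threshold are illustrative.
from collections import defaultdict

def audit_false_positive_rates(records, disparity_threshold=0.8):
    """records: iterable of dicts with keys 'group', 'predicted_match',
    'true_match'. Returns per-group false positive rates and whether the
    best-to-worst FPR ratio stays above the disparity threshold."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # ground-truth non-matches per group
    for r in records:
        if not r["true_match"]:
            neg[r["group"]] += 1
            if r["predicted_match"]:
                fp[r["group"]] += 1
    rates = {g: fp[g] / n for g, n in neg.items() if n > 0}
    if not rates:
        return rates, True
    best, worst = min(rates.values()), max(rates.values())
    # Flag if the best-performing group's FPR is under 80% of the worst's,
    # i.e. the system is markedly less accurate for some groups.
    equitable = worst == 0 or (best / worst) >= disparity_threshold
    return rates, equitable

# Hypothetical usage with toy records:
records = [
    {"group": "A", "predicted_match": True,  "true_match": False},
    {"group": "A", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": False, "true_match": False},
    {"group": "B", "predicted_match": False, "true_match": False},
]
rates, equitable = audit_false_positive_rates(records)
print(rates, "equitable:", equitable)  # {'A': 0.5, 'B': 0.0} equitable: False
```

A production audit would add confidence intervals and minimum sample sizes per group, but the core disparity check is this simple.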


Protective Measures

Bias & fairness

Detection capabilities

Oversight & governance

  • Design human oversight — Human oversight design frameworks maintain meaningful human control over AI systems that affect individual liberty and community safety. (A sketch of one gating pattern follows below.)
  • Red team policing AI — Red teaming for AI systems probes law enforcement AI for bias, adversarial vulnerabilities, and failure modes. The AI red teaming guide provides structured methodologies.
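
One way to make "meaningful human control" concrete is a hard gate: the system can surface a candidate match, but no enforcement-grade decision exists until a named human reviewer records one. Below is a minimal sketch of that pattern; all class and field names are hypothetical, not drawn from any specific agency system.

```python
# Minimal sketch of a human-oversight gate: the AI system may surface a
# candidate match, but no enforcement-grade output exists until a named
# human reviewer records a decision. All names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CandidateMatch:
    subject_id: str
    model_score: float                  # similarity score from the FR system
    reviewed_by: str | None = None
    review_decision: str | None = None  # "confirmed" / "rejected"
    review_time: datetime | None = None

    def record_review(self, reviewer: str, decision: str) -> None:
        if decision not in ("confirmed", "rejected"):
            raise ValueError("decision must be 'confirmed' or 'rejected'")
        self.reviewed_by = reviewer
        self.review_decision = decision
        self.review_time = datetime.now(timezone.utc)  # audit trail

    def enforcement_approved(self) -> bool:
        # Hard gate: a model score alone NEVER authorizes action.
        return self.review_decision == "confirmed"

match = CandidateMatch(subject_id="case-001", model_score=0.93)
assert not match.enforcement_approved()   # a high score is not enough
match.record_review(reviewer="det_jones", decision="confirmed")
assert match.enforcement_approved()       # only after human sign-off
```

The design point is that the review fields double as an audit trail: every enforcement-approved match carries the reviewer's identity and timestamp, which oversight bodies can inspect later.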

Questions law enforcement leaders should ask

  • “Which AI systems in our department make or influence decisions about arrests, detention, bail, or sentencing?”
  • “What accuracy and bias auditing has been conducted on our facial recognition system, broken down by demographic group?”
  • “What appeal mechanisms exist for individuals affected by AI-assisted enforcement decisions?”
  • “How are we monitoring for AI-enabled criminal threats like deepfake fraud and AI-assisted attack planning?”

Regulatory Context

  • EU AI Act (entered into force August 2024, high-risk provisions apply from August 2026) — Prohibits real-time remote biometric identification in public spaces (with limited exceptions for serious crime). Classifies law enforcement AI, including predictive policing and risk assessment, as high-risk requiring conformity assessments, transparency, and human oversight.
  • NIST AI RMF (version 1.0, January 2023) — Provides AI risk management guidance applicable to law enforcement procurement and deployment decisions
  • ISO/IEC 42001 (published December 2023) — Offers an AI management system framework that law enforcement agencies can adopt for governance of AI tools

Law enforcement AI governance is evolving rapidly. Multiple US cities have banned or restricted facial recognition use by police. Biometric data laws (Illinois BIPA, GDPR Article 9) constrain biometric processing. The NIST Face Recognition Vendor Test (FRVT) provides independent accuracy benchmarks that have documented significant demographic performance disparities. Agencies should anticipate increasing requirements for transparency, bias auditing, and civilian oversight of policing AI.


Documented Incidents

Based on incident analysis, law enforcement and public safety incidents cluster around Discrimination & Social Harm (facial recognition bias, predictive policing disparities), Privacy & Surveillance (surveillance overreach and biometric exploitation), and Human-AI Control (automated decision failures in criminal justice).

Last updated: 2026-04-07