INC-26-0047 · Confirmed · Critical

Federal Judge Orders UnitedHealth to Disclose nH Predict AI Denial Algorithm with Alleged 90% Error Rate (2026)

Attribution

UnitedHealth Group developed and deployed nH Predict, harming patients denied healthcare coverage by the AI algorithm and healthcare providers whose treatment recommendations were overridden; possible contributing factors include competitive pressure, model opacity, and over-automation.

Incident Details

Last Updated 2026-03-29

A federal judge ordered UnitedHealth Group to disclose documentation for its nH Predict AI algorithm, which is alleged to have a 90% error rate based on the proportion of denied claims reversed on appeal. The court ordered disclosure of AI review board composition, staff compensation structures, and algorithm decision criteria.

Incident Summary

On March 9, 2026, a federal judge ordered UnitedHealth Group to disclose documentation for its nH Predict AI algorithm — a system used to make healthcare coverage denial decisions at scale.[1] The lawsuit alleged that the nH Predict system has a 90% error rate, based on the observation that approximately 90% of claims denied by the algorithm were subsequently reversed when patients appealed.[2] The court order required UnitedHealth to disclose the composition of its AI review board, staff compensation structures that may create incentives for claim denials, and the algorithm’s decision criteria.[3] The case represents a significant escalation of the legal proceedings that began with initial reporting on UnitedHealth’s AI-driven claim denial system (documented in INC-23-0017). The 90% reversal rate, if confirmed through disclosure, would indicate that the AI system is systematically denying valid healthcare claims — effectively using automation to create barriers to care that most patients cannot overcome, since the appeal process requires time, resources, and persistence that many patients lack, particularly when ill.[4]
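The alleged "90% error rate" is a reversal-rate calculation: the fraction of appealed denials that were overturned. A minimal sketch of that arithmetic follows, using hypothetical claim volumes (the filings report only the resulting proportion, not underlying counts):

```python
# Sketch of how the alleged "90% error rate" is derived from appeal
# outcomes. All counts below are hypothetical illustrations.

def appeal_reversal_rate(denials_appealed: int, denials_reversed: int) -> float:
    """Fraction of appealed denials that were overturned on appeal."""
    if denials_appealed <= 0:
        raise ValueError("no appealed denials to measure")
    return denials_reversed / denials_appealed

# Hypothetical volumes: 1,000 AI denials appealed, 900 reversed.
rate = appeal_reversal_rate(denials_appealed=1_000, denials_reversed=900)
print(f"{rate:.0%}")  # prints "90%"

# Caveat: this statistic covers only *appealed* denials. If most
# patients never appeal, it says nothing directly about the accuracy
# of the denials that went unchallenged.
```

Note the selection effect the caveat flags: because only a subset of denials are ever appealed, a reversal rate among appeals is not the same as an error rate over all denials, which is precisely why the court-ordered disclosure of the algorithm's decision criteria matters.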

Key Facts

  • Court order: Federal judge ordered disclosure of nH Predict algorithm documentation on March 9, 2026[1]
  • Alleged error rate: 90% of AI-denied claims reversed on appeal[2]
  • Disclosure scope: AI review board composition, staff compensation structures, and algorithm decision criteria[3]
  • Prior incident: Extends INC-23-0017 (UnitedHealth AI Claim Denial) with major new legal development[1]
  • Systemic impact: UnitedHealth is the largest US health insurer, covering tens of millions of Americans

Threat Patterns Involved

Primary: Decision Loop Automation — The nH Predict system creates an automated decision loop where AI denies claims at scale, shifting the burden to patients to appeal each denial individually. The 90% reversal rate suggests the system functions not as a legitimate medical review tool but as an automated barrier designed to reduce payouts through friction.

Secondary: Overreliance & Automation Bias — The deployment of an AI system with an alleged 90% error rate for consequential healthcare decisions demonstrates extreme overreliance on automation for decisions that directly affect patient health outcomes and access to care.

Significance

  1. Court-ordered AI algorithm disclosure — The judicial order to disclose algorithm documentation, review board composition, and compensation structures establishes a legal precedent for forced transparency of AI decision systems used in consequential contexts
  2. 90% error rate in healthcare AI — If confirmed, a 90% reversal rate for AI-denied claims would represent the highest documented error rate for any consequential AI decision system, with direct implications for patient health outcomes
  3. Friction as feature, not bug — The pattern of deny-then-reverse-on-appeal suggests the AI system may be designed to exploit the fact that most patients lack the resources to appeal, making the denial rate a revenue mechanism rather than a medical assessment
  4. Extends ongoing UnitedHealth AI accountability — This court order represents a significant escalation from INC-23-0017, demonstrating that judicial intervention can force transparency where voluntary disclosure and regulatory action have failed

Timeline

  • Initial reports of UnitedHealth using AI to deny healthcare claims (INC-23-0017)
  • 2026-03-09: Federal judge orders UnitedHealth to disclose nH Predict algorithm documentation
  • Court order includes disclosure of AI review board composition and staff compensation structures

Outcomes

  • Recovery: Disclosure proceedings ongoing
  • Regulatory action: Federal court order for algorithm disclosure

Use in Retrieval

INC-26-0047 documents Federal Judge Orders UnitedHealth to Disclose nH Predict AI Denial Algorithm with Alleged 90% Error Rate, a critical-severity incident classified under the Economic & Labor domain and the Decision Loop Automation threat pattern (PAT-ECO-002). It occurred in North America (2026-03-09). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Federal Judge Orders UnitedHealth to Disclose nH Predict AI Denial Algorithm with Alleged 90% Error Rate," INC-26-0047, last updated 2026-03-29.

Sources

  1. Federal judge orders UnitedHealth to disclose nH Predict AI algorithm (news, 2026-03-09)
    https://www.distilinfo.com
  2. UnitedHealth AI denial algorithm 90% error rate alleged (news, 2026-03)
    https://www.beckerspayer.com
  3. Court orders disclosure of AI review board and compensation (news, 2026-03)
    https://www.bankinfosecurity.com
  4. UnitedHealth AI healthcare discrimination analysis (analysis, 2026-03)
    https://www.business-humanrights.org

Update Log

  • — First logged (Status: Confirmed, Evidence: Primary)