INC-23-0005: AI-Fabricated Legal Citations in U.S. Courts (2023)
Status: Confirmed · Severity: High · Failure Stage: Systemic Risk
OpenAI and Anthropic developed ChatGPT, Claude, and other LLM-based legal research tools, which attorneys deployed for legal research without verification. The resulting fabricated citations harmed litigants whose cases were compromised, as well as U.S. federal and state court systems; contributing factors included hallucination tendency and over-automation.
Incident Details
| Date Occurred | 2023-05 | Severity | High |
| Evidence Level | Primary | Impact Level | Sector |
| Failure Stage | Systemic Risk | | |
| Domain | Information Integrity | | |
| Primary Pattern | PAT-INF-004 Misinformation & Hallucinated Content | | |
| Secondary Patterns | PAT-CTL-004 Overreliance & Automation Bias | | |
| Regions | North America | | |
| Sectors | Legal | | |
| Affected Groups | General Public | | |
| Exposure Pathways | Direct Interaction | | |
| Causal Factors | Hallucination Tendency, Over-Automation | | |
| Assets & Technologies | Large Language Models | | |
| Entities | OpenAI (developer), Anthropic (developer), Attorneys using AI for legal research without verification (deployer) | | |
| Harm Types | Reputational, Operational | | |
From 2023 to 2025, U.S. federal and state courts sanctioned attorneys in over a dozen cases for submitting briefs containing nonexistent case citations generated by AI tools including ChatGPT and Claude. Beginning with Mata v. Avianca (S.D.N.Y., June 2023), the pattern expanded to include Lacey v. State Farm, Wadsworth v. Walmart, Johnson v. Dunn, and others. Sanctions ranged from $2,000 fines to default judgment against a client. By late 2025, an estimated 1,000+ cases involving AI-fabricated citations had been identified nationwide, prompting the ABA to issue its first ethics opinion on generative AI and multiple courts to adopt mandatory AI disclosure requirements.
Incident Summary
Beginning with the landmark case of Mata v. Avianca, Inc. in June 2023, U.S. federal and state courts have sanctioned attorneys in over a dozen cases for submitting briefs containing nonexistent case citations generated by AI tools.[1]
In the original case, attorney Steven Schwartz used ChatGPT to research legal precedents and filed a brief citing six entirely fabricated cases in the Southern District of New York. Judge P. Kevin Castel imposed a $5,000 fine, finding that Schwartz had acted in “subjective bad faith.” The pattern has since expanded significantly across multiple jurisdictions and law firm sizes.[2]
Notable subsequent cases include Lacey v. State Farm (C.D. Cal., May 2025), where Ellis George Cipollone O’Brien Annaguey LLP and K&L Gates — the 14th-largest U.S. law firm — were jointly sanctioned $31,100 after an attorney sent AI-generated citations to K&L Gates colleagues who incorporated them into a brief without verification. In Johnson v. Dunn (N.D. Ala., July 2025), attorneys from Butler Snow LLP were publicly reprimanded and disqualified from the case, with the court explicitly stating that monetary sanctions were proving “ineffective at deterring” AI fabrications. By late 2025, a tracking database had identified over 1,000 cases involving AI-fabricated citations nationwide.[3]
Key Facts
- Scope: Over 1,000 cases identified involving AI-fabricated legal citations across U.S. courts (2023–2025)
- AI tools involved: ChatGPT (OpenAI), Claude (Anthropic), and in-house AI legal research platforms
- Sanctions range: $2,000 (Gauthier v. Goodyear) to default judgment against a client (Flycatcher v. Affable Avenue)
- Firms sanctioned include: K&L Gates (~1,700 attorneys), Morgan & Morgan (42nd-largest U.S. firm), Butler Snow (350+ attorneys)
- Regulatory response: ABA Formal Opinion 512 (July 2024); multiple federal and state courts adopted mandatory AI disclosure requirements
- Escalation: Courts have moved from monetary fines toward disqualification, bar referrals, and default judgment as monetary sanctions proved insufficient
Threat Patterns Involved
Primary: Misinformation and Hallucinated Content — LLMs generate structurally convincing but entirely fictitious legal case citations, complete with docket numbers, judges' names, and fabricated legal reasoning.
Secondary: Overreliance and Automation Bias — Attorneys across firms of all sizes accepted AI-generated citations at face value without applying standard professional verification procedures.
Significance
- Systemic pattern, not isolated incident — What began as a single embarrassing case has grown into a sector-wide problem affecting law firms of all sizes, from solo practitioners to the 14th-largest firm in the United States, with over 1,000 cases identified by late 2025
- Escalating judicial response — Courts have progressively increased the severity of sanctions from monetary fines to public reprimand, disqualification, bar referrals, and default judgment, reflecting recognition that financial penalties alone are insufficient to deter the practice
- Professional infrastructure changes — The ABA’s Formal Opinion 512 (July 2024) established the first professional ethics framework for generative AI use in legal practice, and multiple jurisdictions now require mandatory AI disclosure in court filings
- Cross-model vulnerability — The problem is not limited to a single AI provider; cases have involved ChatGPT, Claude, and proprietary in-house tools, indicating that hallucinated citations are a structural property of current LLMs rather than a vendor-specific issue
Timeline
- Attorney Steven Schwartz files a brief in Mata v. Avianca citing six ChatGPT-fabricated cases in S.D.N.Y.
- Judge P. Kevin Castel imposes a $5,000 fine on Schwartz and a colleague; the first major AI hallucination sanctions
- ABA issues Formal Opinion 512, its first ethics guidance on generative AI in legal practice
- Gauthier v. Goodyear (E.D. Tex.): attorney sanctioned $2,000 for Claude-generated fabricated citations
- Wadsworth v. Walmart (D. Wyo.): Morgan & Morgan attorneys fined $5,000 for citations from an in-house AI tool
- Fletcher v. Experian (5th Cir.): $2,500 sanction; published opinion warns that lawyers remain responsible for every citation
- Lacey v. State Farm (C.D. Cal.): Ellis George and K&L Gates sanctioned $31,100 for undisclosed AI-generated citations
- Johnson v. Dunn (N.D. Ala.): Butler Snow attorneys publicly reprimanded and disqualified; court notes that monetary sanctions are proving ineffective
- California appellate court fines attorney $10,000 after 21 of 23 quotations in a brief were ChatGPT fabrications
Outcomes
- Regulatory Action: Sanctions imposed in over a dozen federal and state court cases; ABA Formal Opinion 512 issued July 2024; multiple courts adopted mandatory AI disclosure requirements
Use in Retrieval
INC-23-0005 documents AI-fabricated legal citations in U.S. courts, a high-severity incident classified under the Information Integrity domain and the Misinformation & Hallucinated Content threat pattern (PAT-INF-004). It occurred in North America (2023-05). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "AI-Fabricated Legal Citations in U.S. Courts," INC-23-0005, last updated 2026-03-13.
Sources
- Mata v. Avianca, Inc. — Court Order on Sanctions (primary, 2023-06)
  https://law.justia.com/cases/federal/district-courts/new-york/nysdce/1:2022cv01461/575368/54/
- JD Supra: Lawyers Continue to Get in Hot Water for Citing AI Hallucinated Cases (news, 2025-05)
  https://www.jdsupra.com/legalnews/lawyers-continue-to-get-in-hot-water-9433649/
- JD Supra: Fifth Circuit Explains How (Not) to Use AI in Briefing (news, 2025-02)
  https://www.jdsupra.com/legalnews/fifth-circuit-explains-how-not-to-use-8303604/
Update Log
- — First logged (Status: Confirmed, Evidence: Primary)
- — Upgraded from single-incident (Mata v. Avianca) to systemic_risk covering 2023-2025 pattern of AI-fabricated citations across U.S. courts; added Lacey v. State Farm, Wadsworth v. Walmart, Johnson v. Dunn, and other cases; expanded timeline, sources, and significance