INC-25-0011: Deloitte AI-Fabricated Citations in Government Advisory Reports (2025)
Status: Confirmed | Severity: High
Microsoft and OpenAI developed, and Deloitte deployed, Azure OpenAI, harming Australian government agencies that received reports containing fabricated citations, a Canadian provincial government that received a report containing fabricated research, and public trust in professional advisory services; contributing factors included hallucination tendency, over-automation, and insufficient safety testing.
Incident Details
| Date Occurred | 2025-09 | Severity | high |
| Evidence Level | corroborated | Impact Level | Institution |
| Domain | Human-AI Control | ||
| Primary Pattern | PAT-CTL-004 Overreliance & Automation Bias | ||
| Regions | oceania, australia, north america, canada | ||
| Sectors | Government, Corporate | ||
| Affected Groups | Government Institutions | ||
| Exposure Pathways | Algorithmic Decision Impact | ||
| Causal Factors | Hallucination Tendency, Over-Automation, Insufficient Safety Testing | ||
| Assets & Technologies | Large Language Models, Content Platforms | ||
| Entities | Microsoft (developer), OpenAI (developer), Deloitte (deployer), Australian Government (victim), Canadian Provincial Government (victim) | ||
| Harm Types | reputational, operational | ||
Deloitte Australia submitted a $290,000 government report on the future of work containing over 20 fabricated references, including citations to non-existent academic papers and a fabricated quote attributed to a federal court judgment. A law professor identified the hallucinations. Deloitte disclosed it had used Azure OpenAI and refunded the final payment. A second incident involving a million-dollar provincial government report in Canada surfaced in November 2025.
Incident Summary
In October 2025, a law professor reviewing a Deloitte Australia report on the future of work — commissioned by the Australian government for approximately $290,000 — identified over 20 fabricated references within the document.[1]
The fabricated citations included references to non-existent academic papers and a fabricated quote attributed to a federal court judgment. Deloitte subsequently disclosed that it had used Azure OpenAI in preparing the report and refunded the final payment to the Australian government.
In November 2025, a second incident was reported in Canada, where a provincial government report valued at approximately one million dollars was also found to contain AI-fabricated research produced by Deloitte.[2]
Key Facts
- Fabricated content: Over 20 fabricated references in the Australian report, including non-existent academic papers
- Fabricated legal citation: A quote was falsely attributed to a federal court judgment
- AI tool used: Azure OpenAI, as disclosed by Deloitte
- Contract value: $290,000 (Australia); approximately $1 million (Canada)
- Detection method: Manual review by a law professor
- Financial outcome: Deloitte refunded the final payment on the Australian contract
- Scope: Two separate government contracts in two countries affected
Threat Patterns Involved
Primary: Overreliance & Automation Bias — Deloitte relied on AI-generated content for government advisory reports without adequate verification of citations and references, allowing fabricated academic sources and legal quotes to pass through quality assurance processes
Significance
This incident demonstrates critical risks in the integration of generative AI into professional advisory services:
- Citation hallucination at scale — Large language models can generate plausible but entirely fabricated academic references, legal citations, and quotes that are difficult to detect without manual verification
- Quality assurance gaps — Traditional consulting review processes were insufficient to catch AI-generated fabrications, suggesting the need for AI-specific verification workflows
- Government decision-making risk — Fabricated evidence in policy advisory reports could influence government decisions on workforce planning and regulation
- Systemic industry concern — Two separate incidents across two countries suggest the problem may be widespread in professional services firms adopting generative AI tools
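The quality-assurance gap noted above suggests an obvious first line of defence: mechanically cross-checking in-text citations against a report's own reference list before human review. A minimal sketch in Python (the regex, function names, and matching heuristic are illustrative assumptions, not a description of any tool Deloitte used; unmatched citations would still need manual or bibliographic-database verification):

```python
import re

# Matches in-text citations of the form "Smith (2021)" or "Nguyen et al. (2019)".
CITATION_RE = re.compile(r"([A-Z][A-Za-z\-]+(?: et al\.)?)\s*\((\d{4})\)")

def extract_citations(text):
    """Return the set of (author, year) pairs cited in the body text."""
    return set(CITATION_RE.findall(text))

def flag_unmatched(text, reference_list):
    """Flag in-text citations with no corresponding reference-list entry.

    A citation is considered matched when its author surname and year both
    appear in some reference entry; anything unmatched is a candidate
    fabrication and should be escalated for manual checking."""
    flagged = []
    for author, year in sorted(extract_citations(text)):
        surname = author.replace(" et al.", "")
        if not any(surname in ref and year in ref for ref in reference_list):
            flagged.append((author, year))
    return flagged
```

A check like this would not catch a real-looking entry in the reference list itself, but it cheaply surfaces citations that point at nothing, which is exactly the failure mode in this incident.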
Timeline
- 2025-09: Deloitte Australia submits a $290,000 government report on the future of work containing AI-generated content
- 2025-10: A law professor identifies over 20 fabricated references, including non-existent academic papers and a fabricated court quote
- 2025-10: Deloitte discloses its use of Azure OpenAI and refunds the final payment for the Australian report
- 2025-11: A second incident surfaces: a million-dollar Canadian provincial government report is also found to contain AI-fabricated research
Outcomes
- Financial Loss:
  - Deloitte refunded the final payment on the $290,000 Australian contract
- Other:
- Two separate government contracts affected across two countries; reputational damage to Deloitte's advisory practice
Use in Retrieval
INC-25-0011 documents Deloitte AI-Fabricated Citations in Government Advisory Reports, a high-severity incident classified under the Human-AI Control domain and the Overreliance & Automation Bias threat pattern (PAT-CTL-004). It occurred in Oceania (Australia) and North America (Canada) in 2025-09. This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Deloitte AI-Fabricated Citations in Government Advisory Reports," INC-25-0011, last updated 2026-03-13.
Sources
- Fortune: Deloitte AI Australia Government Report Hallucinations (news, 2025-10)
  https://fortune.com/2025/10/07/deloitte-ai-australia-government-report-hallucinations-technology-290000-refund/
- Fortune: Deloitte Caught With Fabricated AI-Generated Research in Canada (news, 2025-11)
  https://fortune.com/2025/11/25/deloitte-caught-fabricated-ai-generated-research-million-dollar-report-canada-government/
Update Log
- — First logged (Status: Confirmed, Evidence: Corroborated)