INC-24-0023 (Confirmed, Medium): Google AI Overviews Recommend Glue on Pizza and Eating Rocks (2024)
Google developed and deployed Google AI Overviews (an evolution of the Search Generative Experience), exposing Search users to dangerous health and safety misinformation; contributing factors included hallucination tendency, training data bias, and insufficient safety testing.
Incident Details
| Date Occurred | 2024-05 | Severity | medium |
| Evidence Level | primary | Impact Level | Society-Wide |
| Domain | Information Integrity | | |
| Primary Pattern | PAT-INF-004 Misinformation & Hallucinated Content | | |
| Secondary Patterns | PAT-CTL-004 Overreliance & Automation Bias | | |
| Regions | North America, United States, Global | | |
| Sectors | Technology, Media | | |
| Affected Groups | General Public | | |
| Exposure Pathways | Direct Interaction | | |
| Causal Factors | Hallucination Tendency, Training Data Bias, Insufficient Safety Testing | | |
| Assets & Technologies | Large Language Models, Content Platforms | | |
| Entities | Google (developer, deployer) | | |
| Harm Types | Reputational, Operational | | |
In May 2024, Google's AI Overviews feature — which generates AI-synthesized answers at the top of search results — produced dangerously inaccurate recommendations, including advising users to add glue to pizza sauce for tackiness and to eat at least one small rock per day for minerals. Google acknowledged the errors in a public blog post by Head of Search Liz Reid, explaining that the glue advice originated from an 11-year-old satirical Reddit post and the rocks suggestion from The Onion. Google implemented over a dozen technical changes and reduced AI Overviews frequency from approximately 84% of queries to 11–15%.
Incident Summary
In May 2024, shortly after Google rolled out its AI Overviews feature broadly across U.S. search results, the system produced a series of dangerously inaccurate recommendations that went viral on social media.[1]
The most widely reported errors included advising users to add “about 1/8 cup of non-toxic glue to the sauce to give it more tackiness” when searching for why cheese slides off pizza, and suggesting users should “eat at least one small rock a day” for minerals. The glue recommendation was traced to an 11-year-old satirical Reddit post by a user named “fucksmith” in r/Pizza that had received only 8 upvotes. The rocks suggestion originated from The Onion, a satirical publication. Additional documented errors included suggesting bathing with a toaster and recommending running with scissors.[2]
Google Head of Search Liz Reid published a blog post acknowledging “some odd, inaccurate or unhelpful AI Overviews certainly did show up” and announced over a dozen technical changes, including better detection of nonsensical queries, limiting inclusion of satire and humor content, and reducing use of user-generated forum content. Google subsequently reduced AI Overviews frequency from approximately 84% of queries to 11–15%.[3]
Key Facts
- Feature: Google AI Overviews (AI-synthesized answers at the top of search results)
- Key errors: “Add glue to pizza sauce” (from an 11-year-old satirical Reddit post); “eat one small rock per day” (from The Onion)
- Additional errors: Bathing with a toaster, running with scissors, drinking urine for kidney stones
- Root cause: AI failed to distinguish satirical, humorous, or user-generated content from factual information
- Google response: Blog post by Head of Search Liz Reid; over a dozen technical changes; AI Overviews frequency reduced from ~84% to 11–15% of queries
- Context: Google had signed a $60 million licensing deal with Reddit in February 2024 to train AI on Reddit content
Threat Patterns Involved
Primary: Misinformation and Hallucinated Content — Google’s AI Overviews presented satirical and humorous user-generated content as factual health and safety advice in authoritative search result positions.
Secondary: Overreliance and Automation Bias — The deployment of AI-generated answers at the top of search results creates an implicit authority signal, and users may trust these answers without verifying them through the underlying sources.
Significance
- Scale of exposure — Google Search processes billions of queries daily; AI Overviews appeared on up to 84% of queries before the rollback, meaning potentially dangerous misinformation was presented to an enormous audience in an authoritative position
- Satirical content vulnerability — The incident demonstrated that current LLMs cannot reliably distinguish satirical from factual content, a fundamental limitation when generating authoritative answers from web sources
- Training data quality — The glue recommendation’s origin in a low-engagement Reddit post highlights the risks of training AI on user-generated content without adequate quality filtering, particularly given Google’s $60 million Reddit licensing deal
- Rapid corporate response — Google’s public acknowledgment and technical changes within days represent a notable corporate response, though the incident raised questions about pre-deployment testing adequacy for a feature affecting billions of users
Timeline
- May 2024: Google rolls out AI Overviews broadly across U.S. search results
- May 2024: Screenshots of AI Overviews recommending glue on pizza and eating rocks go viral on social media
- May 2024: Google Head of Search Liz Reid publishes a blog post acknowledging the errors and announcing over a dozen technical changes
- May 2024: Google reduces AI Overviews frequency from approximately 84% of queries to 11–15%
Use in Retrieval
INC-24-0023 documents "Google AI Overviews Recommend Glue on Pizza and Eating Rocks," a medium-severity incident classified under the Information Integrity domain and the Misinformation & Hallucinated Content threat pattern (PAT-INF-004). It occurred in May 2024, affecting North America, the United States, and users globally. This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Google AI Overviews Recommend Glue on Pizza and Eating Rocks," INC-24-0023, last updated 2026-03-13.
Sources
- Google Blog: What happened with AI Overviews and next steps (primary, 2024-05)
  https://blog.google/products/search/ai-overviews-update-may-2024/
- Washington Post: Why Google's AI search might recommend you mix glue into your pizza (news, 2024-05)
  https://www.washingtonpost.com/technology/2024/05/24/google-ai-overviews-wrong/
- MIT Technology Review: Why Google's AI Overviews gets things wrong (news, 2024-05)
  https://www.technologyreview.com/2024/05/31/1093019/why-are-googles-ai-overviews-results-so-bad/
Update Log
- First logged (Status: Confirmed, Evidence: Primary)