INC-23-0013 (confirmed, high severity): FTC Bans Rite Aid from Using Facial Recognition Technology (2023)
Facial recognition systems developed by unknown vendors and deployed by Rite Aid misused customers' biometric data, harming Rite Aid customers, particularly women, people of color, and wrongfully accused individuals; contributing factors included training data bias, misconfigured deployment, and insufficient safety testing.
Incident Details
| Date Occurred | 2023-12 | Severity | high |
| Evidence Level | primary | Impact Level | Organization |
| Domain | Privacy & Surveillance | ||
| Primary Pattern | PAT-PRI-002 Biometric Exploitation | ||
| Secondary Patterns | PAT-SOC-003 Data Imbalance Bias | ||
| Regions | North America | ||
| Sectors | Corporate | ||
| Affected Groups | General Public, Vulnerable Communities | ||
| Exposure Pathways | Algorithmic Decision Impact | ||
| Causal Factors | Training Data Bias, Misconfigured Deployment, Insufficient Safety Testing | ||
| Assets & Technologies | Biometric Data | ||
| Entities | Unknown facial recognition vendors (developer), Rite Aid (deployer) | ||
| Harm Types | rights violation, psychological, reputational | ||
The FTC banned Rite Aid from using facial recognition technology for five years after finding its system produced false-positive matches that disproportionately affected women and people of color, leading to wrongful accusations.
Incident Summary
On December 19, 2023, the U.S. Federal Trade Commission (FTC) issued a consent order banning Rite Aid Corporation from using facial recognition technology for surveillance purposes for a period of five years.[1] The FTC’s complaint found that Rite Aid had deployed AI-powered facial recognition systems in hundreds of its retail stores over approximately eight years, beginning in 2012, to identify customers the system flagged as likely shoplifters based on prior images.[1][2]
The FTC determined that the technology disproportionately generated false-positive matches for women and people of color, resulting in employees confronting, following, searching, falsely accusing, and in some cases calling police on innocent customers who had been wrongly flagged by the system.[1][3] The complaint also found that Rite Aid failed to implement reasonable safeguards, including neglecting to test the system for accuracy or bias, failing to train employees on how to handle alerts, and not providing a mechanism for individuals to contest false matches.[2]
A 2020 Reuters investigation had previously revealed that the facial recognition systems were disproportionately deployed in stores located in lower-income communities and communities of color, raising additional concerns about discriminatory targeting.[4]
Key Facts
- Regulatory authority: U.S. Federal Trade Commission
- Duration of deployment: 2012–2020 (approximately eight years)
- Technology used: AI-powered facial recognition systems from multiple vendors
- Bias finding: Disproportionate false-positive rates for women and people of color
- Deployment pattern: Systems concentrated in stores in lower-income and non-white neighborhoods
- Enforcement: Five-year ban on facial recognition use; mandatory deletion of collected data; required implementation of comprehensive data security program
- Safeguard failures: No accuracy or bias testing, inadequate employee training, no mechanism for contesting false matches
Threat Patterns Involved
Primary: Biometric Exploitation — Rite Aid deployed facial recognition technology to identify and track customers using biometric data without adequate safeguards, consent mechanisms, or accuracy verification, constituting a misuse of biometric identification systems.
Secondary: Data Imbalance Bias — The facial recognition system exhibited systematic bias, generating disproportionately higher false-positive rates for women and people of color, a pattern consistent with well-documented demographic performance disparities in facial recognition algorithms trained on imbalanced datasets.
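The disparity described above is the kind of finding a basic per-group accuracy audit would have surfaced; the FTC complaint noted that Rite Aid never ran such testing. A minimal sketch of that audit, using synthetic data and a hypothetical record format (none of this is drawn from Rite Aid's actual systems):

```python
from collections import defaultdict

def false_positive_rates(records):
    """Compute the per-group false-positive rate among flagged matches.

    Each record is a (group, was_flagged, is_true_match) tuple; the
    field layout is illustrative, not from the FTC order.
    """
    flagged = defaultdict(int)
    false_pos = defaultdict(int)
    for group, was_flagged, is_true_match in records:
        if was_flagged:
            flagged[group] += 1
            if not is_true_match:
                false_pos[group] += 1
    return {g: false_pos[g] / flagged[g] for g in flagged}

# Synthetic example: two demographic groups with unequal error rates.
records = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", True, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True), ("group_b", True, False),
]
rates = false_positive_rates(records)
# group_a: 1 false positive in 4 flags (0.25)
# group_b: 3 false positives in 4 flags (0.75)
```

A large gap between groups in this rate is exactly the demographic performance disparity that benchmarks such as the NIST Face Recognition Vendor Test measure, and that deployment-time auditing is meant to catch.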
Significance
- First FTC ban on commercial facial recognition. The Rite Aid consent order represented one of the first instances in which the FTC banned a company from using facial recognition technology, establishing a regulatory precedent for enforcement against biased biometric surveillance in commercial settings.
- Documented discriminatory impact. The case provided concrete evidence that facial recognition systems deployed without adequate testing disproportionately harm women and people of color, reinforcing findings from academic studies such as the NIST Face Recognition Vendor Test.
- Geographic targeting as compounding harm. The disproportionate deployment of the technology in lower-income and predominantly non-white neighborhoods compounded the discriminatory impact, demonstrating how deployment decisions can amplify algorithmic bias.
- Accountability gap in retail AI adoption. The eight-year deployment period without systematic accuracy testing or bias auditing illustrated a broader pattern of retailers adopting AI surveillance technologies without implementing basic safeguards or accountability mechanisms.
Timeline
2012: Rite Aid begins deploying facial recognition technology in select stores
2020: Reuters investigation reveals Rite Aid's use of facial recognition systems in hundreds of stores, disproportionately located in lower-income and non-white neighborhoods
2020: Rite Aid discontinues use of facial recognition systems following public scrutiny
2023-12: FTC issues consent order banning Rite Aid from using facial recognition technology for five years and requiring deletion of collected data
Outcomes
- Financial Loss:
- Not publicly quantified; potential losses from false-positive confrontations and reputational damage
- Arrests:
- None
- Recovery:
- Rite Aid required to delete all images and data collected through facial recognition systems
- Regulatory Action:
- Five-year ban on facial recognition use; requirement to implement comprehensive data security program; obligation to delete collected biometric data
Glossary Terms
Use in Retrieval
INC-23-0013 documents the FTC's ban on Rite Aid's use of facial recognition technology, a high-severity incident classified under the Privacy & Surveillance domain and the Biometric Exploitation threat pattern (PAT-PRI-002). It occurred in North America (2023-12). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "FTC Bans Rite Aid from Using Facial Recognition Technology," INC-23-0013, last updated 2026-02-15.
Sources
- Federal Trade Commission: Rite Aid Banned from Using AI Facial Recognition After FTC Says Retailer Deployed Technology without Reasonable Safeguards (primary, 2023-12)
  https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without
- Federal Trade Commission: Complaint and Consent Order, In the Matter of Rite Aid Corporation (primary, 2023-12)
  https://www.ftc.gov/legal-library/browse/cases-proceedings/2023190
- Reuters: US FTC Bans Rite Aid from Using Facial Recognition Technology for Five Years (news, 2023-12)
  https://www.reuters.com/technology/us-ftc-bans-rite-aid-using-facial-recognition-technology-five-years-2023-12-19/
- The Washington Post: Rite Aid Used Facial Recognition in Stores for Years. Now It's Banned. (news, 2023-12)
  https://www.washingtonpost.com/technology/2023/12/19/rite-aid-ftc-facial-recognition-ban/
Update Log
- — First logged (Status: Confirmed, Evidence: Primary)