INC-20-0001 · Confirmed · Critical · Systemic Risk — Clearview AI Mass Facial Recognition Scraping (2020)
Clearview AI developed the system; Clearview AI and law enforcement agencies worldwide deployed it, drawing on biometric data, training datasets, and content platforms, and harming the general public, social media users, and individuals misidentified by the system. Contributing factors included a regulatory gap and an accountability vacuum.
Incident Details
| Date Occurred | 2020-01 | Severity | critical |
| Evidence Level | primary | Impact Level | Society-Wide |
| Failure Stage | Systemic Risk | | |
| Domain | Privacy & Surveillance | | |
| Primary Pattern | PAT-PRI-003 Mass Surveillance Amplification | | |
| Secondary Patterns | PAT-PRI-002 Biometric Exploitation | | |
| Regions | North America, Europe | | |
| Sectors | Government, Law Enforcement | | |
| Affected Groups | General Public, Government Institutions | | |
| Exposure Pathways | Algorithmic Decision Impact | | |
| Causal Factors | Regulatory Gap, Accountability Vacuum | | |
| Assets & Technologies | Biometric Data, Training Datasets, Content Platforms | | |
| Entities | Clearview AI (developer, deployer); Law enforcement agencies worldwide (deployer) | | |
| Harm Types | Rights Violation, Psychological | | |
Clearview AI scraped billions of facial images from social media platforms without consent to build a facial recognition database used by law enforcement agencies worldwide, raising mass surveillance concerns.
Incident Summary
Clearview AI, a facial recognition technology company, built one of the world’s largest facial image databases by scraping publicly available photographs from social media platforms, news sites, and other online sources without the knowledge or consent of the individuals depicted. The company’s existence and practices were revealed by a New York Times investigation published in January 2020, which reported that the database contained over 3 billion images at the time.[1]
The company marketed its facial recognition service primarily to law enforcement agencies, enabling them to identify individuals by uploading a photograph and receiving matches from the scraped database. By 2022, the database had reportedly grown to over 30 billion images. The practice of mass biometric data collection without consent drew immediate legal and regulatory scrutiny across multiple jurisdictions.
Data protection authorities in Italy, France, the United Kingdom, Australia, and other countries found that Clearview AI violated privacy laws by collecting and processing biometric data without a legal basis or adequate consent.[2] Combined regulatory fines exceeded EUR 50 million, and several jurisdictions ordered the company to delete data belonging to their residents. The case has become a landmark reference point for the legal boundaries of facial recognition technology and mass biometric data collection.
Key Facts
- Method: Automated scraping of facial images from publicly accessible websites and social media platforms
- Database size: Over 30 billion images as of 2022
- Clients: Primarily law enforcement agencies in the United States and other countries
- Legal findings: Violations of GDPR, Australian Privacy Act, and other data protection laws
- Regulatory fines: Over EUR 50 million combined across multiple jurisdictions
- Orders: Multiple data protection authorities ordered deletion of data and cessation of processing
Threat Patterns Involved
Primary: Mass Surveillance Amplification — Clearview AI’s technology enabled the creation of a searchable biometric surveillance infrastructure covering billions of individuals, fundamentally altering the scale at which facial recognition could be deployed.
Secondary: Biometric Exploitation — The collection and commercial use of biometric data (facial images) without consent constituted a direct exploitation of individuals’ biometric identifiers for surveillance purposes.
Significance
- Scale of unconsented biometric data collection. The database’s growth from 3 billion to over 30 billion images demonstrated the ease with which biometric data can be harvested from publicly available online sources at unprecedented scale.
- Cross-jurisdictional enforcement challenge. Despite regulatory fines and orders across multiple countries, enforcement has proven difficult against a company operating primarily from the United States, highlighting gaps in international data protection enforcement.
- Precedent for biometric privacy regulation. The case has directly influenced the development of biometric data protection laws and facial recognition regulations in multiple jurisdictions.
- Consent and public space. The incident raised fundamental questions about whether individuals have a reasonable expectation that publicly shared photographs will not be aggregated into a mass surveillance database.
Timeline
- Pre-2020 — Clearview AI begins scraping publicly available images from social media platforms and websites
- 2020-01 — The New York Times publishes an investigation revealing Clearview AI's facial recognition database of over 3 billion images
- 2020 — Multiple social media platforms issue cease-and-desist letters to Clearview AI
- 2021-11 — The Australian Information Commissioner finds Clearview AI breached the Privacy Act
- 2022-03 — The Italian data protection authority (Garante) fines Clearview AI EUR 20 million
- 2022-05 — The UK Information Commissioner's Office fines Clearview AI GBP 7.5 million
- 2022-10 — The French CNIL fines Clearview AI EUR 20 million
- 2022 — Clearview AI reports its database has grown to over 30 billion images
Outcomes
- Financial Loss: Not applicable
- Arrests: None
- Recovery: Not applicable
- Regulatory Action: Multiple data protection fines (under the GDPR and other privacy laws) totaling over EUR 50 million; processing banned in several jurisdictions
Use in Retrieval
INC-20-0001 documents the Clearview AI mass facial recognition scraping incident, a critical-severity incident classified under the Privacy & Surveillance domain and the Mass Surveillance Amplification threat pattern (PAT-PRI-003). It occurred in North America and Europe (2020-01). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Clearview AI Mass Facial Recognition Scraping," INC-20-0001, last updated 2025-01-15.
Sources
- The New York Times: The Secretive Company That Might End Privacy as We Know It (news, 2020-01)
  https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
- EDPB: Facial recognition — several national data protection authorities investigate Clearview AI (primary, 2022-03)
  https://edpb.europa.eu/news/news/2022/facial-recognition-several-national-data-protection-authorities-investigate-clearview_en
Update Log
- — First logged (Status: Confirmed, Evidence: Primary)