INC-18-0002 (Confirmed, High Severity): Amazon AI Recruiting Tool Gender Bias (2018)
Amazon developed and deployed an AI recruiting tool, harming female job applicants and women in the technology sector; contributing factors included training data bias, model opacity, and insufficient safety testing.
Incident Details
| Date Occurred | 2018-10 | Severity | high |
| Evidence Level | primary | Impact Level | Organization |
| Domain | Discrimination & Social Harm | | |
| Primary Pattern | PAT-SOC-002 Allocational Harm | | |
| Secondary Patterns | PAT-SOC-003 Data Imbalance Bias | | |
| Regions | North America | | |
| Sectors | Employment | | |
| Affected Groups | Workers | | |
| Exposure Pathways | Algorithmic Decision Impact | | |
| Causal Factors | Training Data Bias, Model Opacity, Insufficient Safety Testing | | |
| Assets & Technologies | Decision Automation, Training Datasets | | |
| Entities | Amazon (developer, deployer) | | |
| Harm Types | rights violation, financial | | |
Amazon's internal AI recruiting tool was found to systematically penalize resumes containing references to women, reflecting gender bias learned from historically male-dominated hiring data.
Incident Summary
Between 2014 and 2017, Amazon developed an experimental AI-powered recruiting tool designed to automate the screening of job applicants’ resumes.[1] The system was trained on patterns derived from resumes submitted to the company over a ten-year period. Because the technology industry has been historically male-dominated, the training data reflected a disproportionate number of male applicants, particularly for technical roles.
The AI system learned to penalize resumes that contained the word “women’s,” such as references to “women’s chess club captain” or “women’s studies.” It also downgraded candidates who had attended all-women’s colleges.[1] These patterns emerged because the model optimized for characteristics associated with previously successful (predominantly male) applicants.
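The dynamic described above can be reproduced in miniature. The sketch below trains a plain logistic-regression screener on synthetic, deliberately imbalanced hiring data; the vocabulary, sampling probabilities, and labels are all invented for illustration and do not reflect Amazon's actual system. It shows how a model rewarded for matching a male-skewed hiring history assigns a negative weight to a gender-correlated token like "women's" even though the token carries no skill information:

```python
import math
import random

random.seed(0)

VOCAB = ["engineer", "python", "leadership", "women's", "chess"]

def featurize(tokens):
    # Bag-of-words presence features over the tiny vocabulary.
    return [1.0 if w in tokens else 0.0 for w in VOCAB]

def make_dataset(n=2000):
    """Synthetic hiring history with a gender-skewed labeling process."""
    data = []
    for _ in range(n):
        female = random.random() < 0.2  # imbalanced applicant pool
        tokens = {w for w in ("engineer", "python", "leadership", "chess")
                  if random.random() < 0.5}
        if female and random.random() < 0.7:
            tokens.add("women's")  # gender-correlated token (e.g. clubs, colleges)
        # Biased historical label: skill matters, but past hiring favored men.
        score = len(tokens & {"engineer", "python", "leadership"})
        hired = score >= 2 and (not female or random.random() < 0.3)
        data.append((featurize(tokens), 1.0 if hired else 0.0))
    return data

def train(data, lr=0.1, epochs=200):
    # Plain SGD logistic regression.
    w = [0.0] * len(VOCAB)
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log loss w.r.t. z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

w, _ = train(make_dataset())
for word, weight in zip(VOCAB, w):
    print(f"{word:10s} {weight:+.2f}")
```

The learned weight on "women's" comes out negative because the token proxies for gender, and gender correlated with rejection in the biased history; the skill tokens come out positive. Nothing in the optimization "intends" discrimination, which is exactly the failure mode the incident illustrates.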
Amazon’s internal teams identified the bias and attempted to correct it, but concluded that there was no guarantee the system would not find other ways to discriminate. The company disbanded the development team in 2017 and scrapped the tool. The incident became public when Reuters published an investigation in October 2018.
Key Facts
- Method: AI resume-screening tool trained on historical hiring data
- Bias identified: Systematic penalization of resumes referencing women or women’s institutions
- Training data: 10 years of predominantly male applicant resumes
- Duration of development: Approximately 2014 to 2017
- Outcome: Tool was never deployed as a sole decision-maker but was used experimentally
- Disclosure: Internal discovery followed by public reporting via Reuters
Threat Patterns Involved
Primary: Allocational Harm — The AI system created disparate outcomes in resource allocation (employment opportunities) based on gender-correlated features in resumes.
Secondary: Data Imbalance Bias — The tool’s discriminatory behavior originated from training data that reflected historical gender imbalances in technology sector hiring.
Significance
- Bias amplification through historical data. The incident demonstrated that AI systems trained on biased historical data will reproduce and potentially amplify those biases, even when discrimination is not an intended feature of the system.
- Difficulty of bias remediation. Amazon’s inability to correct the bias after identification highlights the challenge of debiasing machine learning models once discriminatory patterns have been learned.
- Transparency and accountability gaps. The tool operated internally for several years before the bias was publicly disclosed, raising questions about the obligation of companies to audit and report on AI-driven decision-making systems.
- Regulatory catalyst. The incident has been widely cited in policy discussions around AI fairness and contributed to subsequent regulatory frameworks addressing automated hiring tools, including New York City’s Local Law 144.
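Audits required under frameworks such as Local Law 144 center on a selection (impact) ratio per demographic group, commonly checked against the four-fifths rule of thumb. A minimal sketch of that computation follows; the counts are invented for illustration and are not data from this incident:

```python
from collections import Counter

# Hypothetical screening outcomes: (group, selected) pairs.
outcomes = ([("men", True)] * 180 + [("men", False)] * 420 +
            [("women", True)] * 40 + [("women", False)] * 360)

def impact_ratios(outcomes):
    """Selection rate per group, normalized by the highest group rate."""
    totals, selected = Counter(), Counter()
    for group, chosen in outcomes:
        totals[group] += 1
        if chosen:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

ratios = impact_ratios(outcomes)
print(ratios)
# Four-fifths rule of thumb: ratios below 0.8 flag potential adverse impact.
flagged = [g for g, r in ratios.items() if r < 0.8]
```

With these numbers, men are selected at 30% and women at 10%, giving women an impact ratio of one third of the men's rate, well below the 0.8 threshold such an audit would flag.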
Timeline
- 2014: Amazon begins developing an AI-powered recruiting tool to automate resume screening
- 2015: Internal teams discover the tool penalizes resumes containing the word "women's"
- 2017: Amazon disbands the team working on the recruiting tool after failing to correct the bias
- 2018-10: Reuters publishes an investigation detailing the tool's discriminatory behavior
Outcomes
- Financial Loss: Not publicly disclosed
- Arrests: None
- Recovery: Not applicable
- Regulatory Action: None at the time; the tool was scrapped after internal discovery of bias
Use in Retrieval
INC-18-0002 documents the Amazon AI Recruiting Tool Gender Bias incident, a high-severity incident classified under the Discrimination & Social Harm domain and the Allocational Harm threat pattern (PAT-SOC-002). It occurred in North America (2018-10). This page is maintained by TopAIThreats.com as part of an evidence-based registry of AI-enabled threats. Cite as: TopAIThreats.com, "Amazon AI Recruiting Tool Gender Bias," INC-18-0002, last updated 2025-01-15.
Sources
- Reuters: Amazon scraps secret AI recruiting tool that showed bias against women (primary, 2018-10)
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Update Log
- — First logged (Status: Confirmed, Evidence: Primary)