Algorithmic Bias
Systematic errors in AI systems that produce unfair outcomes, often favouring one group over another.
Definition
Algorithmic bias refers to systematic and repeatable errors in computer systems that produce unfair outcomes, such as consistently favouring one demographic group over another. In AI systems, bias typically originates from unrepresentative training data, flawed model design, or feedback loops that amplify existing societal inequalities. Algorithmic bias can result in discriminatory decisions across domains including hiring, lending, criminal justice, and public services.
How It Relates to AI Threats
Algorithmic bias is a core harm mechanism within Discrimination & Social Harm, where it drives allocational harm (unfair distribution of resources or opportunities), representational harm (reinforcing stereotypes), and proxy discrimination (using correlated attributes as stand-ins for protected characteristics). It also intersects with Economic & Labor threats through biased hiring algorithms and automated workforce management.
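The proxy-discrimination mechanism can be made concrete with a minimal sketch. The data, field names, and the choice of zip code as the correlated proxy are all hypothetical illustrations, not drawn from any real incident: a screening rule that never reads the protected attribute can still produce sharply unequal outcomes when it filters on a variable correlated with group membership.

```python
# Hypothetical applicant pool. Group membership and zip code are
# correlated: most of group A lives in "90210", most of group B in "10453".
applicants = (
    [{"group": "A", "zip_code": "90210"}] * 80
    + [{"group": "A", "zip_code": "10453"}] * 20
    + [{"group": "B", "zip_code": "90210"}] * 20
    + [{"group": "B", "zip_code": "10453"}] * 80
)

# A seemingly neutral screening criterion: prefer certain zip codes.
PREFERRED_ZIPS = {"90210"}

def passes_screen(applicant):
    """Screening rule: never reads 'group', only the proxy 'zip_code'."""
    return applicant["zip_code"] in PREFERRED_ZIPS

def pass_rate(group):
    """Fraction of a group's applicants that pass the screen."""
    members = [a for a in applicants if a["group"] == group]
    return sum(passes_screen(a) for a in members) / len(members)

# Group A passes at 0.8, group B at 0.2, even though the rule
# never touched the protected attribute.
```

Removing the protected attribute from the inputs, as this sketch shows, does not remove the bias; the proxy carries the same information indirectly.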
Why It Occurs
- Training data reflects historical patterns of discrimination
- Model optimisation targets aggregate accuracy rather than fairness across groups
- Proxy variables encode protected characteristics indirectly
- Feedback loops amplify initial biases over time
- Insufficient testing across demographic subgroups during development
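The last two failure modes above, feedback loops and insufficient subgroup testing, are typically caught by auditing outcomes per demographic group rather than in aggregate. The sketch below (hypothetical data and function names) computes per-group selection rates and their ratio; a ratio below 0.8 fails the conventional "four-fifths" rule of thumb used in US employment-discrimination analysis.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group selection rate from (group, was_selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group selection rate to the highest.

    Values below 0.8 fail the conventional four-fifths rule.
    Returns (ratio, per-group rates).
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit: 100 decisions per group, group A selected
# 60% of the time, group B only 30%.
decisions = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 30 + [("B", False)] * 70
)
ratio, rates = disparate_impact(decisions)
# ratio is 0.5, which fails the four-fifths rule.
```

A model can score well on aggregate accuracy while failing this kind of check, which is why per-subgroup evaluation is usually treated as a separate test from overall performance.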
Real-World Context
Algorithmic bias has been documented in Amazon’s AI hiring tool (INC-18-0002), which systematically downgraded female applicants. Australia’s Robodebt scheme (INC-16-0001) used automated income averaging that disproportionately affected vulnerable populations. The Dutch childcare benefits scandal (INC-13-0001) demonstrated how algorithmic fraud detection targeted families with dual nationality through proxy variables.
Related Incidents
Dutch Childcare Benefits Algorithm Discrimination
Australia Robodebt Automated Welfare Fraud Detection
Amazon AI Recruiting Tool Gender Bias
Google Gemini Produces Historically Inaccurate Image Outputs Due to Bias Overcorrection
FTC Bans Rite Aid from Using Facial Recognition Technology
Meta Housing Ad Discrimination DOJ Settlement
UK A-Level Algorithm Downgrades Disadvantaged Students
Facebook AI Mistranslation of Arabic Post Leads to Wrongful Arrest in Israel
COMPAS Recidivism Algorithm Racial Bias
Last updated: 2026-02-14