Automated Decision-Making
Using algorithms or AI to make decisions affecting individuals with limited human review.
Definition
Automated decision-making (ADM) refers to the use of algorithms, rule-based systems, or AI models to make or substantially inform decisions that affect individuals — including determinations about welfare eligibility, hiring, credit scoring, criminal sentencing, and public service allocation. ADM systems range from fully automated processes with no human involvement to semi-automated workflows where algorithmic outputs guide a human decision-maker. The concept is central to data protection law, where regulations such as the GDPR establish rights related to solely automated decisions that produce legal or similarly significant effects.
How It Relates to AI Threats
Automated decision-making sits at the intersection of Human-AI Control and Discrimination & Social Harm. Within Human-AI Control, ADM raises concerns about overreliance on algorithmic outputs (automation bias) and the adequacy of human-in-the-loop safeguards — particularly when reviewers lack the time, training, or information to meaningfully override system recommendations. Within Discrimination & Social Harm, ADM can produce allocational harm when biased models systematically deny resources or opportunities to certain groups. The opacity of many ADM systems further compounds these risks by limiting individuals’ ability to understand or contest decisions made about them.
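The semi-automated pattern described above — algorithmic outputs routed through a human gate — can be sketched in code. This is a hypothetical illustration, not a reference implementation: the `route` function, its thresholds, and the `Decision` fields are all assumptions chosen to show one common safeguard, namely that adverse or borderline outcomes are escalated to a human reviewer rather than decided automatically.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    score: float      # hypothetical model risk score in [0, 1]
    outcome: str      # "approve" or "review"
    automated: bool   # True when no human saw the case

def route(applicant_id: str, score: float,
          auto_approve_below: float = 0.2) -> Decision:
    """Illustrative human-in-the-loop gate: only clearly low-risk cases
    are fully automated; everything else goes to a human reviewer."""
    if score < auto_approve_below:
        return Decision(applicant_id, score, "approve", automated=True)
    # Adverse or borderline outcomes are never decided without review.
    return Decision(applicant_id, score, "review", automated=False)

# A high-risk score is escalated to a person, not auto-denied.
high_risk = route("A-123", 0.87)
# A clearly low-risk score is auto-approved.
low_risk = route("A-456", 0.05)
```

Note that a gate like this only mitigates automation bias if reviewers have the time, training, and information to genuinely disagree with the score — otherwise the "review" path collapses into rubber-stamping.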
Why It Occurs
- Organisations adopt ADM to reduce costs, increase processing speed, and impose consistency in high-volume decision environments
- The perceived objectivity of algorithmic systems can lead to reduced scrutiny compared with human decision-makers
- Regulatory frameworks in many jurisdictions have not kept pace with the deployment of AI in consequential decision-making
- Human reviewers in semi-automated systems frequently default to accepting algorithmic recommendations
- Transparency requirements are often limited or poorly enforced, leaving affected individuals without meaningful recourse
Real-World Context
The Dutch childcare benefits scandal (INC-13-0001) is among the most extensively documented cases of ADM failure, in which an automated fraud detection system wrongly flagged thousands of families — disproportionately those with dual nationality — for benefit repayment. The resulting harm included financial ruin, family separations, and the eventual resignation of the Dutch government. The case prompted significant regulatory discussion across the EU regarding the need for human oversight provisions and algorithmic impact assessments in public-sector ADM deployments.
Last updated: 2026-02-14