
How to Detect AI Bias: A Practitioner Checklist

Step-by-step workflow for auditing AI systems for discriminatory outcomes, including fairness metric selection, disaggregated evaluation, data auditing, and regulatory compliance guidance.

Last updated: 2026-03-21

Who this is for: ML engineers, product managers, compliance officers, and civil rights analysts responsible for evaluating AI systems for bias before deployment or during operation — particularly systems used in hiring, lending, housing, healthcare, education, or criminal justice.

What AI Bias Is and Why Auditing Matters

AI bias occurs when an AI system produces systematically different outcomes for different groups of people in ways that are unjust or discriminatory. Bias can emerge from training data that underrepresents or misrepresents specific populations, from features that serve as proxies for protected attributes, from modeling choices that optimize for the majority population, or from deployment contexts that differ from training conditions.

The consequences of deploying biased systems are well documented.

Standard performance metrics (accuracy, F1, AUC) mask group-level disparities because they aggregate across the full population. Bias auditing disaggregates performance by group to reveal the disparities those aggregate metrics conceal.
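The disaggregation idea can be sketched in a few lines; the labels, predictions, and group names below are invented for illustration, not drawn from a real system:

```python
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups):
    """Return overall accuracy plus accuracy broken out per group."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for yt, yp, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(yt == yp)
    per_group = {g: hits[g] / totals[g] for g in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return overall, per_group

# Illustrative data: group B receives more errors than group A.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
overall, per_group = disaggregated_accuracy(y_true, y_pred, groups)
# overall is 0.625, but group A scores 0.75 while group B scores 0.5:
# the single aggregate number hides a 25-point gap.
```

The same pattern generalizes to any metric: compute it once over the whole population, then once per group, and compare.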

For the underlying science, see the AI Bias & Fairness Auditing Methods reference page.

Threat patterns this guide addresses

Step 1: Define the Audit Scope

Step 2: Select Appropriate Fairness Metrics

No single fairness metric is universally correct; different metrics suit different contexts. Several common metrics are also mathematically incompatible with one another, so you must decide which to prioritize.
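A tiny worked example (with made-up data) shows why the choice matters: the same set of predictions can satisfy demographic parity perfectly while showing a large equal-opportunity gap.

```python
def selection_rate(y_pred, groups, g):
    """Fraction of group g receiving a positive prediction."""
    preds = [p for p, grp in zip(y_pred, groups) if grp == g]
    return sum(preds) / len(preds)

def true_positive_rate(y_true, y_pred, groups, g):
    """Fraction of truly positive members of group g predicted positive."""
    pos = [yp for yt, yp, grp in zip(y_true, y_pred, groups)
           if grp == g and yt == 1]
    return sum(pos) / len(pos)

# Illustrative labels and predictions for two groups of four.
y_true = [1, 1, 0, 0, 1, 0, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Demographic parity: compare selection rates across groups.
dp_gap = abs(selection_rate(y_pred, groups, "A")
             - selection_rate(y_pred, groups, "B"))
# Equal opportunity: compare true-positive rates across groups.
eo_gap = abs(true_positive_rate(y_true, y_pred, groups, "A")
             - true_positive_rate(y_true, y_pred, groups, "B"))
# dp_gap is 0.0 (both groups selected at 50%), yet eo_gap is 0.5:
# qualified members of group A are found only half as often.
```

A model that looks fair under one definition can fail badly under another, which is why the audit must state up front which metric governs the decision context.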

For allocation decisions (hiring, lending, housing)

For risk scoring (recidivism, fraud, insurance)

For content and recommendations

Step 3: Collect and Prepare Data

Step 4: Run Quantitative Analysis

Compute fairness metrics

Use auditing tools

Tool                  | Approach                                       | Best for
IBM AI Fairness 360   | 70+ metrics, bias mitigation algorithms        | Comprehensive technical audit
Microsoft Fairlearn   | Fairness assessment + constrained optimization | Python-based ML pipelines
Google What-If Tool   | Interactive visualization of model behavior    | Exploratory analysis
Aequitas              | Group fairness audit with report generation    | Policy-focused audits
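Before reaching for a full toolkit, the core computation is often simple enough to sketch by hand. As one illustration (with hypothetical selection counts), this applies the EEOC "four-fifths rule": each group's selection rate should be at least 80% of the highest group's rate.

```python
def four_fifths_check(selections, threshold=0.8):
    """selections maps group -> (selected_count, total_count).

    Returns each group's impact ratio (its selection rate divided by
    the best-off group's rate) and the set of groups below threshold.
    """
    rates = {g: s / t for g, (s, t) in selections.items()}
    top = max(rates.values())
    ratios = {g: r / top for g, r in rates.items()}
    flagged = {g for g, r in ratios.items() if r < threshold}
    return ratios, flagged

# Hypothetical counts: 50/100 of group_A selected vs 30/100 of group_B.
selections = {"group_A": (50, 100), "group_B": (30, 100)}
ratios, flagged = four_fifths_check(selections)
# group_B's impact ratio is 0.6, below the 0.8 threshold, so it is flagged.
```

The four-fifths rule is a screening heuristic, not a legal conclusion; a flagged result means the disparity needs investigation and documentation, not that liability is established.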

Step 5: Audit the Data and Features

Quantitative disparities have root causes in the data and the features; investigate both.

Data audit

Feature audit
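One common feature-audit technique is a proxy check: correlate each candidate feature with the protected attribute, since a highly correlated feature can reintroduce bias even after the attribute itself is removed. A minimal sketch with invented values (the feature names and numbers are hypothetical):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Protected attribute encoded 0/1 (hypothetical encoding).
protected = [0, 0, 0, 0, 1, 1, 1, 1]
features = {
    "zip_code_income": [72, 70, 68, 74, 41, 39, 43, 40],  # strong proxy
    "years_experience": [3, 7, 2, 9, 4, 6, 8, 1],          # weak proxy
}
# Absolute correlation of each feature with the protected attribute.
proxies = {name: round(abs(pearson(vals, protected)), 2)
           for name, vals in features.items()}
# zip_code_income correlates at ~0.99 and warrants removal or review;
# years_experience correlates at ~0.09.
```

Linear correlation is only a first pass: a feature can proxy for a protected attribute nonlinearly or in combination with others, so a stronger check is to train a small model to predict the protected attribute from the feature set and inspect its accuracy.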

Step 6: Document and Decide

Where This Guide Fits in AI Threat Response

  • Auditing (this guide): Is this system biased? Evaluate AI systems for discriminatory outcomes.
  • Auditing methods: How does bias auditing work? Technical reference on fairness metrics, impossibility results, and tool comparisons.
  • Risk monitoring: Is bias emerging over time? Continuous monitoring for drift and emerging disparities.
  • Model governance: Who approved this deployment? Organizational gates requiring fairness evaluation.
  • Deployment checklist: Is this system ready? Pre-deployment checklist including bias assessment.

What This Guide Does Not Cover

  • Fairness metric theory and impossibility results — see AI Bias & Fairness Auditing Methods
  • Bias mitigation techniques — this guide covers detection, not remediation
  • Continuous monitoring — see AI Risk Monitoring Systems
  • Legal analysis — consult legal counsel for jurisdiction-specific requirements