Governance Concept

Protected Characteristics

Legally defined attributes, such as race, gender, age, disability, and religion, that anti-discrimination law forbids using as grounds for adverse treatment in decisions affecting individuals.

Definition

Protected characteristics are personal attributes that national and international law identifies as impermissible grounds for discrimination. Common protected characteristics include race, ethnicity, gender, age, disability, religion, sexual orientation, and national origin, though the specific list varies by jurisdiction. These legal protections exist because historical patterns of discrimination based on these attributes have caused systemic harm. In the context of AI systems, protected characteristics take on new significance because machine learning models can inadvertently — or deliberately — use these attributes, or proxies for them, as factors in consequential decisions about employment, credit, healthcare, and criminal justice.
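To make the notion of a "proxy" concrete, the minimal sketch below (plain Python, entirely hypothetical toy data) measures the association between a non-protected feature and a protected attribute using Cramér's V, a standard statistic for categorical association; a strong association suggests the feature could act as an indirect stand-in for the protected attribute. This is an illustrative check only, not a complete proxy audit.

```python
from collections import Counter
from math import sqrt

def cramers_v(xs, ys):
    """Cramér's V: association strength between two categorical variables, 0 to 1."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    # Chi-squared statistic from observed vs. expected cell counts.
    chi2 = 0.0
    for x in px:
        for y in py:
            expected = px[x] * py[y] / n
            observed = joint.get((x, y), 0)
            chi2 += (observed - expected) ** 2 / expected
    k = min(len(px), len(py)) - 1
    return sqrt(chi2 / (n * k)) if k > 0 else 0.0

# Hypothetical toy data: postal district vs. self-reported ethnicity.
postal = ["A", "A", "A", "B", "B", "B", "A", "B"]
ethnic = ["x", "x", "y", "y", "y", "x", "x", "y"]
print(f"association = {cramers_v(postal, ethnic):.2f}")  # high values flag a potential proxy
```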

How It Relates to AI Threats

Protected characteristics are central to the Discrimination and Social Harm Threats domain, particularly within the proxy-discrimination sub-category. AI systems trained on historically biased data may learn to correlate protected characteristics with outcomes, perpetuating or amplifying existing patterns of discrimination. Even when protected attributes are explicitly excluded from model inputs, proxy variables such as postal codes, browsing patterns, or linguistic features can serve as indirect indicators. This creates a challenge for fairness: technical compliance with non-discrimination rules does not guarantee equitable outcomes when the underlying data reflects structural inequalities. Regulators increasingly require that AI systems undergo bias audits examining disparate impact across protected groups.
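One widely used audit statistic for disparate impact is the selection-rate ratio, conventionally checked against the four-fifths threshold from US employment-discrimination practice (specific audit requirements vary by jurisdiction). A minimal sketch, assuming hypothetical audit data in the form of (group, selected) pairs:

```python
from collections import defaultdict

def disparate_impact(decisions):
    """Selection rate per group, plus the ratio of the lowest to the highest rate.

    `decisions` is a list of (group, selected) pairs; a ratio below 0.8 is
    the conventional four-fifths-rule flag for potential adverse impact.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    rates = {g: selected[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical audit sample: (protected group, hired?).
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates, ratio = disparate_impact(sample)
print(rates, f"ratio = {ratio:.2f}")  # a ratio below 0.8 would warrant investigation
```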

Why It Occurs

  • Training datasets encode historical patterns of discrimination that models learn and reproduce
  • Proxy variables correlated with protected characteristics allow indirect discrimination despite exclusion of direct attributes
  • Standard formal fairness criteria, such as demographic parity and equalised odds, cannot in general be satisfied simultaneously when base rates differ across groups, forcing trade-offs in system design (see the sketch after this list)
  • Organisational incentives may prioritise predictive accuracy over equitable treatment across groups
  • Regulatory frameworks vary across jurisdictions, creating inconsistent protection standards
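To make the incompatibility point concrete, the sketch below computes two standard group-fairness metrics on the same hypothetical predictions: the selection rate (what demographic parity equalises) and the true-positive rate (what equal opportunity equalises). When base rates differ between groups, equalising one metric generally breaks the other; in this toy data the true-positive rates match while the selection rates do not.

```python
def group_metrics(rows):
    """rows: (group, true_label, predicted_label) triples.

    Returns per-group selection rate (demographic parity) and
    true-positive rate (equal opportunity) so the trade-off is visible.
    """
    out = {}
    for g in {r[0] for r in rows}:
        sub = [r for r in rows if r[0] == g]
        sel = sum(p for _, _, p in sub) / len(sub)          # selection rate
        pos = [(y, p) for _, y, p in sub if y == 1]
        tpr = sum(p for _, p in pos) / len(pos) if pos else float("nan")
        out[g] = {"selection_rate": sel, "tpr": tpr}
    return out

# Hypothetical predictions: base rates differ (group A has more true
# positives than group B), so equal TPRs force unequal selection rates.
rows = [("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 1),
        ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 1)]
print(group_metrics(rows))
```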

Real-World Context

Multiple documented cases have shown AI systems producing discriminatory outcomes along protected characteristic lines. Hiring algorithms have penalised resumes associated with female candidates, healthcare risk-scoring systems have underestimated the needs of Black patients, and facial recognition systems have exhibited higher error rates for darker-skinned individuals. The EU AI Act classifies AI systems used in employment, credit, and public services as high-risk, requiring conformity assessments that include evaluation of bias across protected characteristics.

Last updated: 2026-02-14