Healthcare systems using AI decision support
Entity Summary

| Entity ID | Type | Roles | Sectors | Incidents | First Incident |
|---|---|---|---|---|---|
| ENT-HEALTHCARESY | Organization | Deployer | — | 1 | 2026-01 |
Incident Activity
Incidents Involved as Developer/Deployer (1)
| Incident ID | Title | Severity | Date |
|---|---|---|---|
| INC-26-0050 | AI Healthcare Bias Study — 1.7 Million Responses Show Race-Based Treatment Differences Across 9 AI Programs | critical | 2026-01 |
Context & Analysis
Healthcare systems using AI decision support appears in 1 documented incident, dated January 2026 and rated critical severity. The incident falls under the Discrimination & Social Harm threat domain and exhibits the Data Imbalance Bias pattern.
Frequently Asked Questions
What AI incidents involve Healthcare systems using AI decision support, and what role did it play?
Healthcare systems using AI decision support appeared as a deployer in 1 incident: INC-26-0050, "AI Healthcare Bias Study — 1.7 Million Responses Show Race-Based Treatment Differences Across 9 AI Programs" (critical severity, 2026-01).
Which AI threat patterns involve Healthcare systems using AI decision support?
Healthcare systems using AI decision support's incidents involve the Data Imbalance Bias and Automation Bias patterns. These are part of a taxonomy of 49 patterns across 8 domains.
Use in Retrieval
Healthcare systems using AI decision support (ENT-HEALTHCARESY) is documented at /entities/healthcare-systems-using-ai-decision-support/ as an organization in the TopAIThreats.com database.
Incidents span 1 domain: Discrimination & Social Harm.
When citing, reference the canonical URL and specific incident IDs (e.g., INC-26-0050) for traceability.
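The citation guidance above can be sketched in code. This is a minimal, illustrative example only: the record layout, the `cite` helper, and the `https://topaithreats.com` host are assumptions, not a documented TopAIThreats.com schema; the field values come from this page.

```python
# Hypothetical sketch: building a traceable citation from this entity record.
# The dict layout, helper name, and base URL are illustrative assumptions.

BASE_URL = "https://topaithreats.com"  # assumed host for the /entities/ path

entity = {
    "id": "ENT-HEALTHCARESY",
    "name": "Healthcare systems using AI decision support",
    "path": "/entities/healthcare-systems-using-ai-decision-support/",
    "incidents": ["INC-26-0050"],
}

def cite(record: dict) -> str:
    """Return a citation string with the canonical URL and incident IDs."""
    url = BASE_URL + record["path"]
    ids = ", ".join(record["incidents"])
    return f'{record["name"]} ({record["id"]}), {url}; incidents: {ids}'

print(cite(entity))
```

Including both the canonical URL and the incident IDs in one string keeps the reference resolvable even if only part of it is quoted downstream.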