Autonomous Vehicle
A vehicle using AI to navigate and operate without direct human control.
Definition
An autonomous vehicle (AV) is a vehicle equipped with sensors such as cameras, lidar, and radar, and with AI systems — including machine learning models — capable of perceiving its environment, planning routes, and executing driving decisions without direct human input. Autonomous vehicles are classified on a six-level scale (SAE Levels 0–5), ranging from no automation to full autonomy in all conditions. Current commercial deployments primarily operate at Levels 2–4, requiring varying degrees of human supervisory attention. The development of autonomous vehicles represents one of the most prominent real-world applications of AI decision-making in safety-critical environments.
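The SAE six-level scale can be sketched as a simple lookup. This is an illustrative sketch only; the level names follow SAE J3016, but the helper function and its name are hypothetical:

```python
from enum import IntEnum


class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels (0-5)."""
    NO_AUTOMATION = 0           # human performs all driving
    DRIVER_ASSISTANCE = 1       # steering OR speed assistance
    PARTIAL_AUTOMATION = 2      # steering AND speed; human supervises
    CONDITIONAL_AUTOMATION = 3  # system drives; human is the fallback
    HIGH_AUTOMATION = 4         # no human fallback within design domain
    FULL_AUTOMATION = 5         # no human fallback, all conditions


def requires_human_fallback(level: SAELevel) -> bool:
    """Hypothetical helper: does this level rely on a human backup?

    At Levels 0-2 the human must supervise continuously; at Level 3
    the human must be ready to take over on request; at Levels 4-5
    the system needs no human fallback within its operational
    design domain.
    """
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```

On this sketch, the Levels 2–4 deployments mentioned above straddle the boundary: Levels 2–3 still depend on a human backup, while Level 4 does not.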
How It Relates to AI Threats
Autonomous vehicles are directly relevant to the Agentic & Autonomous and Human-AI Control threat domains. They represent a high-stakes deployment of AI decision-making where system failures can result in physical harm or death. The sub-category of unsafe human-in-the-loop failures is particularly salient: in Level 2 and Level 3 systems, human drivers are expected to maintain supervisory awareness and intervene when needed, but empirical evidence indicates that humans are poorly suited to this monitoring role, particularly during extended periods of automated operation. The handoff between automated and human control remains an unresolved safety challenge.
Why It Occurs
- Perception systems can fail to correctly identify objects, pedestrians, or unusual road conditions
- Edge cases — rare but consequential scenarios — are difficult to anticipate and train for comprehensively
- Human operators in semi-autonomous vehicles experience attention degradation during extended periods of automated driving
- The transition of control from automated system to human driver introduces a latency period during which neither may be fully in command
- Competitive market pressure incentivises deployment before safety validation is complete across all operating conditions
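The handoff latency described above can be made concrete with a minimal state-machine sketch. All names and the time budget are hypothetical, not drawn from any real AV stack:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TakeoverMonitor:
    """Tracks a control handoff from automation to the human driver.

    Between the takeover request and the driver's confirmation,
    neither party may be fully in command -- the latency window
    described in the list above. The budget value is illustrative.
    """
    takeover_budget_s: float = 4.0          # hypothetical allowed latency
    _requested_at: Optional[float] = None   # timestamp of pending request

    def request_takeover(self, now: float) -> None:
        """Automation asks the human to resume control at time `now`."""
        self._requested_at = now

    def driver_confirmed(self, now: float) -> bool:
        """Return True if the driver took over within the budget."""
        if self._requested_at is None:
            raise RuntimeError("no takeover pending")
        elapsed = now - self._requested_at
        self._requested_at = None
        return elapsed <= self.takeover_budget_s
```

For example, a driver responding 2.5 s after the request is within a 4 s budget, while a 5 s response is not; in a real system the second case would have to trigger a minimal-risk fallback maneuver rather than simply report failure.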
Real-World Context
The fatal collision involving an Uber autonomous test vehicle in Tempe, Arizona (INC-18-0001) demonstrated the consequences of unsafe human-in-the-loop design. The vehicle’s perception system detected a pedestrian but classified the object inconsistently, while the safety driver — whose role was to monitor the automated system — was not attentive at the time of the incident. The National Transportation Safety Board investigation highlighted failures in both the automated system’s decision logic and the organisational safety culture. This incident contributed to significant regulatory scrutiny of autonomous vehicle testing protocols and underscored the limitations of relying on human backup in semi-autonomous systems.
Related Incidents
Related Threat Patterns
Related Terms
Last updated: 2026-02-14