The promise of full self-driving technology has long been framed around a single, compelling argument: machines don’t get distracted, drunk, or drowsy. With the NHTSA reporting that 94% of road accidents stem from human error, the case for autonomous vehicles sounds strong on paper. But as more self-driving and semi-autonomous cars log miles on American roads, the safety data is proving to be more complicated — and more nuanced — than the industry’s early optimism suggested.
A Growing Fleet, a Growing Dataset
The federal government began systematically tracking autonomous vehicle crashes in 2021, when the NHTSA required companies to report crashes involving cars with automation, splitting them into two categories: ADS (Automated Driving Systems) covering higher-level automation like Waymo and Cruise robotaxis, and Level 2 ADAS (Advanced Driver Assistance Systems) like Tesla Autopilot, GM Super Cruise, and Ford BlueCruise.
That reporting requirement has produced a detailed — if sobering — picture. Between 2021 and 2024, there were 3,979 incidents involving autonomous vehicles in the United States, resulting in 496 injuries and 83 fatalities. The numbers continue to rise year over year, though that increase is partly explained by fleet growth rather than a deterioration in safety performance.
The Crash Rate Paradox
“One of the most cited statistics in the self-driving debate is the per-mile crash rate. For every 1 million miles driven, there are 9.1 self-driving car crashes, compared to a crash rate of 4.1 per million miles for conventional human-driven vehicles,” says Bayley and Galyen, a Fort Worth truck accident law firm.
On the surface, that looks damning. However, context matters considerably. Self-driving vehicles operate disproportionately in dense urban environments — cities like San Francisco, Phoenix, and Austin — where traffic is heavier and the probability of low-speed collisions is higher regardless of who or what is driving.
Equally important is the nature of the crashes themselves. For fully autonomous vehicles, damage most commonly occurs at the rear — 54% of the time — suggesting that human drivers behind them are frequently the at-fault party. Studies have also found that injuries in autonomous vehicle crashes tend to be less severe than those in human-driven accidents.
Tesla vs. Waymo: Two Very Different Profiles
The two biggest names in autonomous driving tell starkly different safety stories. Tesla has reported the most crashes among semi-autonomous vehicles, with 2,093 incidents, while Waymo leads in fully autonomous vehicle crashes with 907 incidents reported. But those numbers require important context. Tesla’s fleet is massive — over three million U.S. vehicles driving more than 30 billion miles annually — making a direct comparison with Waymo’s far smaller robotaxi operation statistically misleading.
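The fleet-size caveat can be made concrete with a quick back-of-the-envelope normalization. The sketch below is illustrative only: the Tesla incident count and annual mileage come from the figures above, while the small-fleet mileage is an invented placeholder rather than a real Waymo figure.

```python
# Illustrative only: raw crash counts mean little without exposure (miles driven).
# Incident counts are from the article; the small-fleet mileage is a placeholder.

def crashes_per_million_miles(crashes: int, total_miles: float) -> float:
    """Normalize a raw crash count by miles of driving exposure."""
    return crashes / (total_miles / 1_000_000)

# Large fleet: 2,093 incidents spread over ~30 billion annual miles.
large_fleet_rate = crashes_per_million_miles(2_093, 30_000_000_000)

# Hypothetical small robotaxi fleet: same-order incident count, far fewer miles.
small_fleet_rate = crashes_per_million_miles(907, 100_000_000)  # assumed mileage

print(f"Large fleet:  {large_fleet_rate:.3f} crashes per million miles")
print(f"Small fleet:  {small_fleet_rate:.2f} crashes per million miles")
```

Even with a higher raw incident count, the larger fleet's per-mile rate comes out orders of magnitude lower, which is why raw totals alone can't settle the comparison.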
The NHTSA also began investigating Tesla’s Full Self-Driving system after multiple reports of crashes that occurred in low-visibility conditions. The agency’s scrutiny has intensified as Tesla’s system has expanded its capabilities and public availability.
The Human Factor Remains
A recurring theme in autonomous vehicle safety research is that many crashes don’t reflect technology failure — they reflect misuse. Drivers in semi-autonomous vehicles often over-trust the system, disengaging from the task of monitoring the road even when the vehicle requires their oversight, a phenomenon researchers call “automation complacency.” When the system suddenly hands control back, those drivers are caught off guard. This handoff problem remains one of the most stubborn challenges facing Level 2 systems in particular.
The Road Ahead
Public attitudes are shifting, if slowly. A recent AAA Foundation for Traffic Safety survey found that the share of drivers who trust self-driving cars rather than fear them rose from 9% to 13% between 2024 and 2025. That’s still a slim slice of the population, but it reflects a gradual warming as consumers accumulate more real-world experience with the technology.
The long-term projections remain ambitious. A KPMG report predicts that by 2050, autonomous vehicle technology could reduce the frequency of accidents by nearly 90%. Whether that vision materializes depends on how quickly engineers can solve edge cases, how effectively regulators can establish consistent national standards, and how well automakers can communicate to drivers what their systems can — and cannot — do.
For now, the verdict is mixed. Full self-driving technology shows genuine promise in reducing the most catastrophic types of human-error crashes, but it introduces new risks and failure modes that are still being understood. The data is accumulating. The question is whether the industry and its regulators can keep pace with it.
