Tesla's latest quarterly safety report paints a mixed picture of the Autopilot system. The company continues to insist that its driver-assistance technology makes its cars far safer than those driven by humans alone; beneath that flashy headline number, however, a troubling pattern is emerging.
According to Tesla's Q3 2025 safety report, vehicles with Autopilot engaged recorded one crash per 6.36 million miles driven. That figure looks promising, especially against the U.S. national average of one crash per roughly 702,000 miles, which Tesla derives from National Highway Traffic Safety Administration (NHTSA) and Federal Highway Administration (FHWA) data. By Tesla's own calculation, drivers using Autopilot crash roughly nine times less often than unassisted drivers.
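That headline multiple is easy to reproduce from the two figures cited above; the following is a back-of-the-envelope sketch in Python, not Tesla's own methodology:

```python
# Rough check of Tesla's headline multiple, using only the
# miles-per-crash figures cited in this article.
autopilot_miles_per_crash = 6_360_000   # Tesla Q3 2025, Autopilot engaged
us_average_miles_per_crash = 702_000    # national average per NHTSA/FHWA data

ratio = autopilot_miles_per_crash / us_average_miles_per_crash
print(f"Autopilot miles between crashes vs. national average: {ratio:.1f}x")
# Prints roughly 9.1x, matching the "about nine times safer" framing.
```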
That number, however, is down from earlier in the year: Tesla reported one crash per 7.44 million miles in Q1 2025, and the best figure on record came in Q1 2024, at 7.63 million miles between crashes. The slide suggests that even as Tesla markets major improvements to Autopilot, its reported safety performance is heading in the wrong direction.
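The size of that slide can be quantified from the same reported figures; again, this is a simple illustrative sketch, not an official Tesla calculation:

```python
# Decline in Tesla's reported Autopilot miles-per-crash,
# computed from the quarterly figures cited above.
q1_2024_peak = 7_630_000   # best reported figure: Q1 2024
q1_2025 = 7_440_000        # Q1 2025 report
q3_2025 = 6_360_000        # latest Q3 2025 report

decline_from_peak = (q1_2024_peak - q3_2025) / q1_2024_peak
decline_from_q1_2025 = (q1_2025 - q3_2025) / q1_2025
print(f"Down {decline_from_peak:.0%} from the Q1 2024 peak")    # ~17%
print(f"Down {decline_from_q1_2025:.0%} since Q1 2025")         # ~15%
```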

A Retrospective: Autopilot's Progress
When Tesla began publishing quarterly safety reports in 2018, it reported one crash per 3.35 million miles with Autopilot engaged and one per 1.92 million miles without, against a national average of one crash per 481,000 miles. On paper, today's 6.36 million miles is not bad. Viewed in a broader context, though, it marks a downward trend from recent peaks, and Tesla provides no detailed figures to explain it.
That points to one of the biggest problems with Tesla's safety claims. The company publishes just one top-line statistic each quarter: crashes per mile driven with and without Autopilot. There is no breakdown by road type, weather conditions, driver demographics, or crash severity. Tesla also does not disclose how often Autopilot malfunctions, how often drivers must take over to avoid a hazard, or how often the software fails to handle the situation in front of it.
Usage of Autopilot
The vast majority of Tesla drivers use Autopilot on highways, where conditions are simpler: no pedestrians, no intersections, no bike lanes, and fewer unpredictable hazards. Highway driving is inherently safer than city driving, so a system used mostly in that setting will naturally appear to perform better. Even so, in more chaotic traffic environments such as China, the system reportedly manages to drive without safety interventions.
Autopilot was designed specifically for use on controlled-access highways, and Tesla itself acknowledges this. Its Traffic-Aware Cruise Control and Autosteer features keep the car at speed and in its lane, easing driver fatigue on long journeys. But this is not autonomous driving. Tesla's more advanced Full Self-Driving (FSD) software is built to navigate city streets, yet it too remains a driver-assistance system that requires a fully attentive human behind the wheel.
The Transparency Problem
The distinction between Autopilot and FSD matters because Tesla's marketing has tended to blur the line between automation and autonomy. Critics argue that by branding its software "Full Self-Driving," Tesla encourages drivers to overestimate its capabilities, leading to misuse and preventable accidents.
Regulators have already opened several investigations into Autopilot incidents, at least some of them involving fatal crashes in which the system allegedly failed to detect obstacles or disengaged too late.
Until Tesla publishes more detail about how Autopilot and FSD actually perform across different conditions, it is impossible to judge whether the technology is genuinely improving or merely looks safer because of selective disclosure.
