The companies leading the race to a fully autonomous car have tested a wide range of sensor packages that allow their vehicles to perceive the world around them. Most of Tesla's competitors rely on a combination of LiDAR, radar, ultrasonic sensors, and cameras. Tesla, however, has chosen to go in an altogether different direction: it has adopted an entirely camera-based approach to Full Self-Driving (FSD). Elon Musk has repeatedly stated that this approach is not only less expensive but also safer.
LiDAR and Radar: The Arguments Against Them
LiDAR (Light Detection and Ranging) uses laser pulses to generate a high-resolution 3-D map of the surroundings. Radar uses radio waves to determine the distance and relative speed of objects. Both sensors perform well in some circumstances, particularly in low light or bad weather. Musk, however, has claimed that they can actually undermine the safety of self-driving systems.
Lidar and radar reduce safety due to sensor contention. If lidars/radars disagree with cameras, which one wins?
This sensor ambiguity causes increased, not decreased, risk. That’s why Waymos can’t drive on highways.
We turned off the radars in Teslas to increase safety.…
— Elon Musk (@elonmusk) August 25, 2025
His premise is sensor conflict. Each sensor modality has different resolution, latency, and error characteristics. Cameras provide far richer color and texture detail, while LiDAR and radar return range or point data that is often sparse and carries little semantic information. When their outputs disagree, the system must decide which one to trust, and that arbitration logic is both complex and fallible.
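To make the arbitration problem concrete, here is a minimal Python sketch, not Tesla's actual logic: the threshold, confidence values, and blending policy are purely hypothetical. A planner must produce a single distance to a lead vehicle from two estimates that may disagree.

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    distance_m: float   # estimated distance to the lead vehicle, in metres
    confidence: float   # the sensor's self-reported confidence, 0..1

def arbitrate(camera: Estimate, radar: Estimate, max_gap_m: float = 5.0) -> float:
    """Pick one distance from two possibly conflicting estimates.

    Hypothetical policy: if the sensors roughly agree, blend them by
    confidence; if they diverge beyond max_gap_m, fall back to the more
    conservative (closer) reading.
    """
    if abs(camera.distance_m - radar.distance_m) <= max_gap_m:
        total = camera.confidence + radar.confidence
        return (camera.distance_m * camera.confidence +
                radar.distance_m * radar.confidence) / total
    # Sensors disagree sharply: trusting a phantom radar return causes
    # needless braking, trusting a glare-blinded camera misses an obstacle.
    return min(camera.distance_m, radar.distance_m)

# Example: camera sees open road, radar reports a phantom return at 25 m.
print(arbitrate(Estimate(80.0, 0.9), Estimate(25.0, 0.6)))  # -> 25.0
```

Whichever branch such a policy takes, it can be wrong in either direction, and that possibility is exactly what the vision-only argument points to.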
As Musk put it in one interview, "When radar and vision disagree, then you have a problem." A system forced to guess between them will sometimes pick the wrong one. He argues that humans drive using vision and a mental model of the world, with no additional sensing, and that reproducing this process is the surest path to safe autonomy.
The All-Camera Philosophy
The FSD system now relies exclusively on a suite of eight cameras positioned around the vehicle to provide 360-degree coverage. These cameras feed an advanced neural network trained on millions of real-world driving examples. The network handles visual perception tasks such as object detection, lane-line recognition, traffic-light interpretation, and trajectory prediction. To support these changes, Tesla has also rolled out new hardware, including an additional front-facing camera.
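As a rough illustration of how a multi-camera perception stack can be organized, the PyTorch sketch below runs eight camera frames through a shared backbone and feeds the fused features to separate task heads. The layer sizes, heads, and tensor shapes are assumptions made for illustration and do not reflect Tesla's actual network.

```python
import torch
import torch.nn as nn

class MultiCameraPerception(nn.Module):
    """Toy camera-only perception stack: per-camera CNN features are fused
    into one representation, then illustrative task heads produce outputs."""

    def __init__(self, num_cameras: int = 8, feat_dim: int = 64):
        super().__init__()
        # Shared backbone applied to every camera image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(32, feat_dim, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        fused = num_cameras * feat_dim
        # Illustrative task heads: object classes, lane geometry, light state.
        self.object_head = nn.Linear(fused, 10)
        self.lane_head = nn.Linear(fused, 2)
        self.light_head = nn.Linear(fused, 3)

    def forward(self, images: torch.Tensor):
        # images: (batch, num_cameras, 3, H, W)
        b, n, c, h, w = images.shape
        feats = self.backbone(images.view(b * n, c, h, w)).view(b, -1)
        return {
            "objects": self.object_head(feats),
            "lane": self.lane_head(feats),
            "traffic_light": self.light_head(feats),
        }

model = MultiCameraPerception()
out = model(torch.randn(1, 8, 3, 128, 256))  # one frame from each of 8 cameras
print({k: v.shape for k, v in out.items()})
```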
The core philosophy is that if a machine can see and perceive the environment as well as or better than a human driver, it can drive safely without extra sensors. Trained on a massive dataset of human driving behavior, Tesla's system has developed an understanding not only of what objects are but also of how they move relative to one another, allowing it to make context-sensitive decisions.
The camera-only approach is also simpler and cheaper in hardware terms. LiDAR units can cost hundreds or even thousands of dollars per vehicle, and radar adds processing, wiring, and integration overhead. Removing these sensors reduces the part count and simplifies production, which is consistent with Tesla's strategy of vertical integration and aggressive cost reduction.
Visual-Only Autonomy: The Advantages and Disadvantages
Critics of Tesla's vision-only strategy argue that it sacrifices redundancy. Multiple sensing modalities can provide a backup when one sensor fails or is impaired; radar, for example, can detect objects in fog or heavy rain, conditions in which cameras may be ineffective. A widely cited example is a safety test in which a Tesla relying on vision alone drove into a fake road wall.
Tesla counters that what matters most is not more sensors but proper use of the right ones: cameras. By prioritizing vision, the company can devote more computing resources to training and running its neural networks rather than to merging disparate data feeds. It also employs so-called pseudo-LiDAR techniques, using stereo depth estimation and monocular depth networks to infer 3-D geometry from 2-D images.
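The sketch below illustrates the geometric core of the pseudo-LiDAR idea under simple assumptions: given a depth map predicted by such a network and known pinhole camera intrinsics (the values used here are synthetic), each pixel can be back-projected into a 3-D point, yielding a LiDAR-like point cloud. This is a generic illustration, not Tesla's implementation.

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a per-pixel depth map (metres) into camera-frame 3-D points.

    Once a network has predicted depth from a 2-D image, standard pinhole
    geometry turns each pixel (u, v, depth) into a point (X, Y, Z).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Synthetic example: a 4x4 depth map and made-up intrinsics.
depth = np.full((4, 4), 10.0)          # every pixel 10 m away
points = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(points.shape)                    # (16, 3) pseudo-LiDAR points
```

The resulting cloud can then be consumed by the same kinds of 3-D detection methods normally applied to real LiDAR returns, which is why the technique is framed as a software substitute for the missing sensor.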