Tesla’s Autopilot feature, which is mainly meant for highway driving, is the most advanced computer-controlled driving system available to consumers right now. Like other autonomous driving systems, it combines several technologies that let the car collect and process information somewhat like a human driver would, and drive itself.
Are Self Driving Cars Safe?
A GPS system gives the car information such as its location and the local speed limit. A camera mounted in front of the rear-view mirror and a Continental radar sensor behind the front grille scan about 160 meters of the road ahead of the car. There are also 12 ultrasonic sensors that send out sound pulses and measure how the echoes bounce back to map out the 8 meters around the car.
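The ultrasonic ranging idea above is simple enough to sketch in a few lines. This is a hypothetical illustration, not Tesla's code: a sensor times a pulse's round trip, and distance is half the round trip multiplied by the speed of sound. The function names and the 8-meter cutoff are taken from the description above; everything else is an assumption.

```python
# Toy sketch of ultrasonic time-of-flight ranging (illustrative only).
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in dry air at 20 °C

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to an obstacle, given the pulse's round-trip time in seconds."""
    # The pulse travels out and back, so the one-way distance is half the trip.
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

def in_ultrasonic_range(round_trip_s: float, max_range_m: float = 8.0) -> bool:
    """True if the echo indicates an object inside the sensor's ~8 m range."""
    return echo_distance_m(round_trip_s) <= max_range_m

# An obstacle 2 m away returns an echo after roughly 2 * 2 / 343 ≈ 0.0117 s:
print(round(echo_distance_m(0.0117), 2))  # ≈ 2.01 m
print(in_ultrasonic_range(0.0117))        # True: inside the 8 m bubble
```

In a real car, twelve of these readings taken together give the computer a rough map of the space immediately around the vehicle.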
The car’s computer sorts through all the data from the camera and sensors, tracking the information it needs to navigate the road: lane markers, roadside barriers, and the movement patterns of other vehicles. Then it guides the car down the road, watching for any sudden changes nearby. If a truck in front of the car stops suddenly, for example, the computer will stop the car as well.
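The "truck stops suddenly" case boils down to a stopping-distance check. Here is a minimal sketch of that kind of decision rule, under stated assumptions: a single range reading stands in for the fused radar and camera tracks a real system would use, and the reaction time and deceleration figures are illustrative placeholders, not Tesla's actual parameters.

```python
# Hypothetical brake-decision rule: brake if the distance needed to stop
# (reaction distance plus braking distance) meets or exceeds the gap ahead.

def should_brake(gap_m: float, own_speed_mps: float,
                 reaction_s: float = 0.5, decel_mps2: float = 6.0) -> bool:
    """Return True if the car cannot comfortably stop within the gap."""
    reaction_dist = own_speed_mps * reaction_s            # distance while reacting
    braking_dist = own_speed_mps**2 / (2 * decel_mps2)    # v^2 / (2a) to a halt
    return reaction_dist + braking_dist >= gap_m

# At 25 m/s (90 km/h), stopping takes about 12.5 + 52.1 ≈ 64.6 m:
print(should_brake(gap_m=50.0, own_speed_mps=25.0))  # True: gap too small, brake
print(should_brake(gap_m=80.0, own_speed_mps=25.0))  # False: gap is sufficient
```

The point of the sketch is the structure of the decision, not the numbers: the computer re-runs this kind of check many times per second against fresh sensor data.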
Are Self Driving Cars Safer Than Humans?
Tesla says that Autopilot is in a public beta test, meaning the feature is still experimental and users are helping catch bugs and other problems. The driver is supposed to keep their hands on the steering wheel at all times, ready to take control of the car if need be. If the system detects that the driver’s hands aren’t on the wheel, it beeps, shows multiple warnings, and eventually stops the car.

Having an alert driver is really important because there are a lot of conditions where Autopilot can’t work safely. For example, the car’s camera might not be able to see the lane markings on the road if it is snowing. Even driving up a hill can block the camera’s view. In those cases, when the computer realizes it can’t accurately keep track of everything around it, it turns Autopilot off and tells the driver to take control of the car. Other times, the computer just doesn’t react to sudden changes on the road the way that it should.
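The escalation described above — warn, warn louder, then stop the car — is essentially a small state machine keyed on how long the driver's hands have been off the wheel. A minimal sketch, assuming entirely made-up thresholds (Tesla does not publish its actual timing):

```python
# Hypothetical hands-off escalation ladder. The time thresholds are
# illustrative assumptions, not Tesla's real values.

def handsoff_action(handsoff_seconds: float) -> str:
    """Map time with no hands detected on the wheel to an escalating response."""
    if handsoff_seconds < 10:
        return "ok"                # hands recently detected, no action
    if handsoff_seconds < 25:
        return "visual_warning"    # flash a message on the dashboard
    if handsoff_seconds < 40:
        return "audible_warning"   # beep until the driver responds
    return "slow_to_stop"          # driver unresponsive: bring the car to a halt

for t in (5, 15, 30, 60):
    print(t, handsoff_action(t))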
Are Self Driving Cars Safe Pros and Cons?
Computers do have advantages over humans when it comes to this stuff: they can constantly monitor every direction without getting distracted or tired, and their response times can be much faster than our reflexes.
Humans have their own advantages, too. We have sensory perception and decision-making skills that are far more advanced than any computer program.
According to Tesla, this is the first fatality in over 1 billion miles (about 1.6 billion kilometers) that users have driven with Autopilot mode enabled, while with human drivers there is, on average, one fatal car accident for every 150 million kilometers driven. Honestly, this isn’t a valid comparison: Teslas are very safe cars with advanced safety systems, so they’re bound to have fewer deaths per passenger mile with Autopilot on or off. The average car on American roads is more than 10 years old, and so are its safety features. And, of course, a sample size of one is pretty statistically useless. Even so, it’s clear that even at this very early stage, computer-driven cars seem pretty good at their jobs. But self-driving car systems, of course, aren’t anywhere close to perfect, and Tesla is open about Autopilot’s limitations.
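For readers who want to see the raw arithmetic behind that (flawed) comparison, here it is with the units normalized to kilometers. This only reproduces the two figures quoted above; as the article notes, a sample size of one fatality makes the resulting ratio statistically meaningless.

```python
# Reproducing the rough rate comparison from the text (units in kilometers).
MILES_TO_KM = 1.609344

autopilot_km = 1e9 * MILES_TO_KM       # >1 billion miles on Autopilot, 1 fatality
human_km_per_fatality = 150e6          # ~1 fatality per 150 million km on average

# Naive ratio of the two per-kilometer fatality rates:
ratio = autopilot_km / human_km_per_fatality

print(f"Autopilot: 1 fatality per {autopilot_km / 1e6:.0f} million km")
print(f"Human average: 1 fatality per {human_km_per_fatality / 1e6:.0f} million km")
print(f"Naive ratio: {ratio:.1f}x")
```

The naive ratio comes out around 10x in Autopilot's favor, which is exactly the kind of headline number the paragraph above warns against taking at face value.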
Tesla Autopilot Accident
Now let’s talk about the incident: a May 7, 2016 crash in Williston, Florida, where Autopilot failed to see a turning tractor-trailer. A driver was killed while his car was doing most of the driving for him, in this case a Tesla Model S with its Autopilot mode enabled. The fact that there hadn’t been any fatalities until now is a testament to the technology behind self-driving cars, but this crash is a reminder that the technology has its limitations. Investigators from the National Highway Traffic Safety Administration are still piecing together the details of the accident, but they do know the basics: the driver, Joshua Brown, had his car’s Autopilot mode activated while driving down a highway in Florida. When a tractor-trailer made a left turn in front of the car, the car didn’t stop; it went under the trailer, then hit a fence and a power pole. Brown was killed, and Autopilot didn’t save him. When humans drive cars, we follow the road, keep track of people, bikes, and other cars, and look out for any sudden changes that might mean we have to swerve or stop to avoid an accident.
When the tractor-trailer (which was white) turned left in front of the car, the computer couldn’t see it against the bright sky, so it didn’t hit the brakes, and neither did Brown. One of the chief concerns with Autopilot isn’t technological; it’s psychological. If you believe your Autopilot system is going to save you, you might not worry so much about being distracted. An aftermarket DVD player was found in Brown’s Tesla. Whether he was watching it at the time of the crash is unknown, but that is the kind of behavior an Autopilot system might encourage, whether or not it’s expressly forbidden. More research needs to be done to make these systems better, but also on how they affect driver behavior and how to ensure that drivers use them properly. Either way, Brown’s death wasn’t Autopilot’s fault: the system didn’t cause an accident by driving dangerously. The driver and Autopilot both failed to detect a sudden change and prevent a crash. In some ways, software has been a matter of life and death for years, but never so much as with self-driving cars. They have a good track record, and the technology will keep improving. But their human driving partners are a necessary part of the safety of these systems, and that’s going to be the case for quite a long time.