
The Ethical Dilemma of Self-Driving Cars: Can a Self-Driving Car Be Programmed to Kill You to Save More People?

Your self-driving car is cruising down the highway when a group of pedestrians suddenly steps into the road, and there is no time to stop. The car must choose between hitting the pedestrians or swerving off the road and crashing into a ditch. What will it do? Many people are questioning AI decision-making as we move closer to a world of autonomous vehicles.

Some people argue that the car should be programmed to save as many lives as possible, even if that means sacrificing the life of the person in the car. Others argue that the car should be programmed to protect its occupants at all costs. Some experts believe that artificial intelligence could eventually be used to determine who dies in a car crash. This raises several ethical concerns.

Many people would find it difficult to accept that a machine could make such a life-or-death decision. There is also the risk that AI could be biased against certain groups of people if, for example, it were programmed to prioritize the safety of those with a higher social status.

The Ethical Dilemma of Self-Driving Cars

Robots and AI tools are not programmed to copy human behavior or decision-making; instead, they learn from enormous datasets and use mathematical models derived from that data to carry out tasks such as identifying a shade of color. Deaths caused by errors in AI self-driving systems raise moral dilemmas much like “the trolley problem.” The problem is that the technology still has a long way to go before it can drive people safely on its own in real-world situations.
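To make the “learning from data rather than copying rules” point concrete, here is a minimal, hypothetical sketch of a classifier that learns to name a shade of color from labeled examples. The library choice, the RGB values, and the labels are all illustrative assumptions, not anything from the article.

```python
# A toy illustration of "learning from data" rather than hard-coding rules:
# a classifier that learns to name a shade of color from example RGB values.
from sklearn.neighbors import KNeighborsClassifier

# Tiny training set: RGB triples and the shade a human labeled them with.
X = [[255, 0, 0], [200, 30, 30], [0, 0, 255], [30, 30, 200], [0, 255, 0]]
y = ["red", "red", "blue", "blue", "green"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)

# The model was never given an explicit rule for "dark red";
# it generalizes from the examples it has seen.
print(model.predict([[180, 20, 20]]))  # -> ['red']
```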

Picture courtesy: Mark Beach

Trolley Problem of Self-Driving Cars

The trolley problem is a thought experiment in ethics first introduced by philosopher Philippa Foot in 1967. It is one of the best-known problems in philosophy. The problem is this:

A runaway trolley is barreling down the track toward five people who are tied to it and unable to move. You are standing next to a lever. If you pull the lever, the trolley will switch to a different track, where one person is tied. If you do nothing, the trolley will stay on its current track and kill the five people. Pulling the lever means sacrificing one person to save five others.

What should you do?

There is no easy answer to this question. Some people argue that you should pull the lever, because letting five people die is worse than killing one. Others argue that you should not pull the lever, because you would then be directly responsible for that one person’s death.

Is it morally right to sacrifice one life to save five others? This is the question that the trolley problem poses. Some people may say that it is morally right to save the five people, as they are innocent and have not done anything to deserve death. Others may say that it is not morally right to kill someone, even if it is to save several other lives.

Elon Musk swears by the efficacy of Tesla’s self-driving system and even says that Tesla cars on Autopilot are ten times safer than manually driven cars. He said, “Even if you, for argument’s sake, reduce fatalities by 90 percent with autonomy, the 10 percent that do die with autonomy are still gonna sue you. The 90 percent that are living don’t even know that that’s the reason they’re alive.”
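To see why that quote stings, here is a quick back-of-the-envelope illustration of the 90 percent figure. The 40,000 annual-fatality baseline is an assumed round number used only to make the arithmetic visible; it is not a statistic from the article or from Tesla.

```python
# Illustrative arithmetic for a hypothetical 90% reduction in fatalities.
# The 40,000 baseline is an assumed round number, not a cited statistic.
baseline_fatalities = 40_000   # assumed annual fatalities with human drivers
reduction = 0.90               # the hypothetical 90% reduction from the quote

with_autonomy = baseline_fatalities * (1 - reduction)
lives_saved = baseline_fatalities - with_autonomy

print(f"Fatalities with autonomy: {with_autonomy:.0f}")  # 4000 people who may still sue
print(f"Lives saved (invisibly): {lives_saved:.0f}")      # 36000 who never know why
```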

The trolley problem highlights the importance of making ethical decisions. However, it also shows how difficult it can be to make the right choice in a hard situation. Even a decision the AI gets “right” can be judged wrong from a different perspective, as Musk’s point about lawsuits illustrates.

Moral Issues with Self-Driving Cars

The trolley problem has been adapted to many different scenarios, including self-driving cars. In this version of the problem, a self-driving car is barreling down the road. The only way to avoid hitting five people is to swerve into a nearby building, killing the one person inside. Is it morally right to sacrifice one life to save five others?

What Should Self-Driving AI Choose?

As with the original trolley problem, there is no easy answer. People must weigh the pros and cons of each option before making a decision. Crashing into the building would save five lives, but it would also mean taking a life.

Some people may say that it is morally right to swerve into the building, as it would save five innocent lives. Others may say that it is not morally right to take a life, even if it would save several other lives. This decision is not an easy one to make, and people will have different opinions on the matter.
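To make the trade-off concrete, here is a deliberately oversimplified sketch contrasting a utilitarian rule (minimize total deaths) with an occupant-first rule. Everything in it, the option names, the casualty counts, and the framing of a crash as a tidy two-way choice, is an illustrative assumption; real autonomous-driving software does not work this way.

```python
# A deliberately simplified sketch of two competing crash-response policies,
# meant only to make the ethical trade-off concrete.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    occupant_deaths: int
    pedestrian_deaths: int

def utilitarian_choice(options):
    """Minimize total expected deaths, regardless of who they are."""
    return min(options, key=lambda o: o.occupant_deaths + o.pedestrian_deaths)

def occupant_priority_choice(options):
    """Protect the occupants first; break ties by fewer pedestrian deaths."""
    return min(options, key=lambda o: (o.occupant_deaths, o.pedestrian_deaths))

options = [
    Option("stay the course", occupant_deaths=0, pedestrian_deaths=5),
    Option("swerve off the road", occupant_deaths=1, pedestrian_deaths=0),
]

print(utilitarian_choice(options).name)        # -> swerve off the road
print(occupant_priority_choice(options).name)  # -> stay the course
```

Neither function is “the” answer. The point is that whichever rule a manufacturer chooses has to be written down somewhere, and that is exactly where the ethical dilemma turns into an engineering decision.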

Can Self-Driving AI Choose Who Dies?

Many experts have raised concerns over the ethical implications of self-driving cars. One of the main issues is what happens when the car has to make a decision that could result in someone’s death. For example, if the car is about to hit a pedestrian, should it swerve and risk killing the driver instead?

Some people believe that AI systems are not capable of making such moral decisions and therefore should not be trusted with life-or-death situations. Others argue that AI systems could be designed to take into account ethical considerations and that they may even be better at making such decisions than human drivers.

Some people may argue that a self-driving system, as an AI, is capable of making such a decision and should be held accountable for any resulting death or injury. Others may argue that the system is not truly AI and therefore should not be held responsible. Ultimately, the decision of whether or not to hold a self-driving system accountable for death or injury will come down to a moral judgment.

Automobile leaders are grappling with the idea of giving AI the final say in who dies. Tesla has taken a different approach: the company’s founder and CEO, Elon Musk, has said that Tesla’s cars will not be programmed to prioritize the safety of their occupants. Instead, they will be designed to avoid accidents altogether. This is a more difficult task, but Musk believes it is the only ethically acceptable option.

There is no simple answer to this question, but it is critical to consider all of the potential implications before self-driving cars become more widespread. It’s a dilemma that no one wants to face: if you had to choose between killing someone on the road and killing the driver, what would you do?

How are the Leading Automobile Makers Dealing with the Dilemma and Ethical Concerns with Self-Driving Cars?

In theory, a self-driving car should be able to make a better judgment than a human about whether to hit someone on the road or endanger the driver. However, the leading automobile makers are still struggling to reach a consensus on how to deal with this issue.

Some companies, such as Volvo, have decided to take a stand against any autonomous vehicle that could make such decisions. They have pledged that their vehicles will never be programmed to choose between potential victims. Volvo has said that its self-driving cars will be programmed to prioritize the safety of their occupants over that of other road users.

Other companies, like Tesla, believe that giving the car the ability to make these decisions is the only way to ensure safety for all involved. Tesla has so far been less forthcoming about how its autonomous vehicles will deal with such situations. This is perhaps not surprising, as it is a sensitive topic that could have a major impact on public perceptions of self-driving cars.

AI tools and systems become safer as they have more and more data to learn from. Near-future autonomous cars will add more safety features, such as Tesla’s in-cabin camera and sensors that maintain human supervision while on Autopilot. Autonomous driving will progress from “hands-on” to “hands-off” to “eyes off” to “mind off” and, eventually, to “no steering wheel.”
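For readers who want to pin that “hands-on” to “no steering wheel” progression to something more formal, the sketch below loosely maps it onto the SAE J3016 automation levels. The mapping is a rough approximation for illustration only, not an official SAE definition.

```python
# A rough, illustrative mapping of the article's progression onto SAE J3016
# automation levels. The correspondences are approximate, not official.
from enum import IntEnum

class AutonomyStage(IntEnum):
    HANDS_ON = 1           # driver steers; the system assists (roughly SAE Level 1-2)
    HANDS_OFF = 2          # the system steers; the driver supervises continuously
    EYES_OFF = 3           # driver may look away but must take over on request (roughly SAE Level 3)
    MIND_OFF = 4           # no takeover needed within a limited operating domain (roughly SAE Level 4)
    NO_STEERING_WHEEL = 5  # full automation everywhere; manual controls optional (roughly SAE Level 5)

for stage in AutonomyStage:
    print(stage.value, stage.name)
```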

Regardless of how individual companies approach the issue, it is clear that AI-powered cars will need to be able to make life-or-death decisions at some point in the future. The reality is that, until this issue is sorted out, self-driving cars will continue to pose a moral dilemma for both the companies that make them and the consumers that use them. It is an issue that is not likely to be resolved anytime soon.

Legal Issues of Self-Driving Cars

The legal landscape surrounding self-driving cars is evolving rapidly as the technology advances and becomes more integrated into society. As of a few years ago, numerous U.S. states had already begun to introduce legislation specific to driverless vehicles, a legislative trend that is keeping pace with technological development. Ethical dilemmas, such as those that arise in accident scenarios, represent a significant portion of the concerns about self-driving cars. These dilemmas need urgent attention, as the cars are already facing real-world situations that test their decision-making frameworks.

The ethical issues at play go beyond the hypothetical, narrowly focused dilemmas often portrayed in thought experiments and cover a broad spectrum of realistic concerns. Strong and divergent opinions on autonomous vehicles suggest that reaching a consensus on how these vehicles should operate in morally ambiguous situations will be challenging. This is compounded by the fact that moral choices are not universal, which complicates the establishment of a universally accepted moral code for self-driving vehicles.

Furthermore, legal bodies in different jurisdictions are beginning to propose protections for human drivers of autonomous vehicles in the event of crashes caused by technical faults. For instance, the Law Commissions in the UK have suggested that companies should be held accountable for the driving aspects of self-driving cars, indicating a shift towards corporate responsibility for autonomous decision-making on the road. These developments underscore the complex interplay between advancing automotive technology, ethical decision-making, and the evolving legal frameworks intended to govern such innovations.

Conclusion

The trolley problem highlights the importance of making ethical decisions. When faced with a difficult choice, it is important to consider all of the possible outcomes before making a decision. There is no easy answer to this question, but it must be considered as self-driving cars become more prevalent. After all, these cars will be making life-or-death decisions, and we need to be sure that they are prepared to handle such responsibility.

Self-driving cars are becoming increasingly common, and with that comes the question of how these cars will make decisions in situations where there is no clear answer. This could have a profound impact on society and the way we think about ethics. Automakers are doing their best to ensure safety with the AI and robotics technology available in today’s self-driving systems.

Purnima Rathi
Purnima has a strong love for EVs. Whether it's classic cars or modern performance vehicles, she likes to write about anything with four wheels, especially if there's a cool story behind it.

14 COMMENTS

  1. This isn’t a problem, just an excuse to over analyse a situation.

    The trolley and car autonomy is exactly the same. At a given moment in time, you have limited information to act and do so with that information ( wait ) and a level of training. But never perfect training which you want to imply.

    You do not see the future.
    You cannot assume anything.

    There is an object to avoid.
    You have limited space.
    The clear space on the road is available to brake or swerve.
    There is no option to leave the road or choose another object to collide with, as what you thought was 1 person instead of 5, could actually be 52 just behind it. You do not have the time or information.

    So the trolley problem is simple.
    You don’t actually know you are going to kill anybody.
    With no time you avoid the first issue then you deal with the next.
    With time and knowledge of only the immediate both known obstacles you stay the course. As changing will only increase the chance of more unknown objects coming into play.

    Just brake, and swerve only if there is space.
    Really! It’s not that hard.

    • All of these lifeboat logic problems are stupid to begin with. Asking people who have never held people’s lives in their hands what they would do in situations where in reality they would have fractions of a second to act proves nothing and demonstrates even less.

  2. I actually think u messed up the trolley problem. The way I remember it the train would hit 5 people if u do nothing and one person if u switch the track. No one is ever going to pull the switch to kill more people, but they pull it to save 5 and sacrifice one. This is the proper and more interesting format for the trolley problem in my opinion. I could be wrong though

  3. The other thing is that AI will not even be asking these questions. It simply tries to avoid hitting things. It will also be far better at not getting into these situations in the first place as it’s never not paying attention, looking at its phone, changing the radio, yelling at kids etc etc. The AI is forever vigilant and never rests, and once all vehicles are controlled by the AI and communicating with each other there will be no such thing as a car accident. There will also be no need for traffic lights or for a vehicle to ever stop at an intersection etc. This could all be easily done now actually. The only thing holding us back now is human drivers sharing the roads with AI.

  4. Does it really matter? In real life we don’t usually have knowledge of the consequences past the first obstacle, so why should we expect a machine to know what we can’t?

    As an answer to the question even though it is unrealistic if you want to save more lives have the car prioritize the occupants, that will speed up adoption and save lives in the long run

  5. Self driving cars won’t suit Indian road conditions I think..only in sophisticated roads they may be suitable..

  6. I think people expect too much from a computer. Five people step out onto a high speed road. Well, there’s five folks who are just too dumb to live. The computer needs to take the side of the people who bought it. Keep the car and its occupants safe is job 1. Expecting moral decisions from a machine is idiotic. Hell, expecting moral decisions from a human is idiotic 9 out of 10 times.

  7. Volvo have it entirely wrong. The occupants of the self-driving car have taken a decision to be in that car. It is their decision & their responsibility. The innocent third parties have not had any opportunity to be involved in the decision making process.
    The self-driving AI system must always be programmed to avoid third party injury.

    • If I spend over 50K on a self driving car it had better have MY best interests at heart. If people decide to play in traffic they should accept responsibility for their idiocy. Play stupid games, win stupid prizes.

  8. 2 Mike O’HANLON
    And now consider the hitman who jumps in front of a self-driving car with a businessman or politician, forcing the car to turn into a concrete block and to self-destruct. Perfect case…

  9. Car commercials routinely depict driving at high speed on pavement, through tunnels, and in off-road terrain. At least one is roaring along a narrow residential street and turning through an intersection without stopping. Cars, SUVs and trucks are available with over 700hp, EVs over 1,000. And 0-60 times are down to two seconds. These companies don’t care about safety except where it’s mandatory. They just want to sell, sell, sell.

    Speeding is the major risk factor among human drivers, reducing reaction times and challenging vehicle limitations of which most drivers are unaware; until the pop quiz. Yet, the most mundane 4cyl family sedan is allowed to reach 110mph? Tesla’s FSD allows running above the posted speed limit in videos I’ve seen. Seems to me we have some more fundamental ethics of our own to address before we bring our AI vehicles to the trolley tracks.

  10. In self-driving cars, safety is aimed at preventing accidents or stopping accidents from happening. Who is going to die or who is getting injured will depend on the situation; passengers in the cars are secured with seat belts and other precautions to reduce injuries and death. Stopping 3rd-party injury should be the primary aim, I feel.

  11. Agree. In the cinema recently I had the misfortune of having to watch 5 advertisements for new cars. Speed and a VDU panel seemed to be the selling point. It seems every advert contains more emphasis on a VDU panel and what music you can play through it. The euphoria of selling cars and how wonderful the VDU was. In fact, the VDU was God! While waiting for the film to start, no mention from train companies etc about the need to use their transport to reduce CO2. Why aren’t train companies pushing the benefits of train travel? Yes the famous 5 now and again. Again, it’s all bonkers, self driving cars on road infrastructures hopelessly underfunded, already outdated, with cyclists being exposed to self driving cars?
    Self driving cars giving way to pedestrians at junctions ? Cyclists passing inside self driving cars?
    Good to have vision but as usual the, let’s bypass filling in the gaps to obtain the vision and bu..er everyone else.
    It’s corporate nonsense being force-fed to us.
    Mike
