Autonomous vehicles should protect the ‘driver’ and their passengers at all costs!

Group 13

With driverless cars looking sure to be a staple of our future, and expected to be commonplace on the roads by 2030 [1], the ethics behind them are being questioned. In the unlikely scenario where a fatal accident is inevitable and the autonomous vehicle must choose between the life of the person inside the car and that of a pedestrian who happened to be walking by, who should the vehicle choose? Should a computer be allowed to decide whether to save the life of its passenger or the pedestrian? Although this scenario is very unlikely, AI must be programmed with an outcome for every possible situation so that its goals stay in line with ours; experts believe this alignment is the biggest challenge and could cause the most harm if done incorrectly, with AI otherwise heading towards a ‘roadblock’ [2]. This scenario is much more complex than it first seems.

Autonomous vehicles will have reaction times vastly superior to our own [3] and will be able to monitor road conditions continuously, as they will need to prove their safety before being permitted on the roads. But even cutting-edge technology isn’t immune to disaster: the Titanic, for example, was deemed unsinkable before its maiden voyage.

Therefore, if a situation arose where a child unexpectedly ran in front of the car, should the car swerve, killing the occupants, or hit the child? In this case most people would prioritise the child, but if an older person stepped out from behind a parked car, what would the car’s decision be then? MIT has produced a survey, based on the classic trolley problem, that asks who should be spared when different people and animals are in question. Some results are unanimous whereas others are more divided [4][5]. For example, most people would prioritise humans over animals, and many would also choose dogs over criminals; but would this depend on the severity of the crime, and would the vehicle have all this information available to make an informed decision? Would the vehicle have to obey the laws of the road where a human driver would be able to ignore them and save themselves?

Suppose, then, that the autonomous vehicle knew nothing about the people involved except that one is a passenger and the other is outside: who should it choose to live? This situation still has its complications. If person B (the person outside the vehicle) stepped into the road from behind an obstacle without looking, and the vehicle had nowhere else to go, why should the passenger be punished for person B’s lack of awareness? If both people were equally blameless, the vehicle must choose between a single regrettable fatality, which at least spares the vehicles behind from facing an equally difficult decision, and a swerve that could cause a multi-vehicle accident leading to several deaths. Overall, the vehicle should choose the option that involves as few people as possible and causes the least disruption, which unfortunately means person B would be chosen in this case. A minimal sketch of such a rule is given below.
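To make that reasoning concrete, here is a minimal, purely illustrative Python sketch of such a harm-minimisation rule. The `Manoeuvre` type, its fields, and the numbers below are hypothetical assumptions invented for this example, not any real vehicle’s control logic; a real system would have to estimate these quantities from noisy sensor data.

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    """One candidate action, with outcomes pre-estimated upstream.

    All fields are hypothetical placeholders for this example.
    """
    name: str
    expected_casualties: int  # people this action is expected to harm
    pileup_risk: float        # 0.0-1.0 chance of a multi-vehicle accident

def choose_manoeuvre(options):
    """Pick the action that harms the fewest people; break ties by the
    lower risk of triggering a larger multi-vehicle accident."""
    return min(options, key=lambda m: (m.expected_casualties, m.pileup_risk))

# The scenario above: person B has stepped out, and the only alternative
# to hitting them is swerving into oncoming traffic.
options = [
    Manoeuvre("brake hard, hit person B", expected_casualties=1, pileup_risk=0.1),
    Manoeuvre("swerve into oncoming lane", expected_casualties=3, pileup_risk=0.8),
]
print(choose_manoeuvre(options).name)  # -> brake hard, hit person B
```

Note the tie-break order: casualties first, wider disruption second, matching the priorities argued for above.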

In this situation the car should act as an extension of the person driving it, so from a hedonistic point of view the car should always prioritise the people inside it, as this maximises the ‘pleasure’, which in this case is the safety of the occupants. Furthermore, deontology suggests that an action is deemed right or wrong not by its consequences but by whether it was carried out with good intentions. Applied to the scenario of the automated car, the argument can be made that prioritising the safety of the person in the car is simply an act of self-preservation, so the action is morally justified. Given that so many different factors would change who the public believe should be spared, and that the sensors currently used in autonomous vehicles aren’t capable of distinguishing between people, we would argue that the vehicles are not yet competent enough to decide and therefore shouldn’t be road legal until they are. The counter-argument, however, is that because autonomous vehicles can’t distinguish between rich and poor or young and old, they are in fact fairer, since all human life is treated as equal.

Despite the argument that self-preservation may be the fairest outcome for the passengers of the car, we believe that, for the greater good, protecting innocent bystanders and pedestrians is the more moral choice.

Firstly, cars are already designed to protect their passengers in an accident [6]: crumple zones, for instance, reduce the likelihood of a fatality in a crash. Why, then, should autonomous cars be any different? Surely the moral argument is that a pedestrian’s life is far more valuable than the physical car. Care ethics supports this: although a relationship between the passengers and the pedestrian may not exist, a relationship as fellow humans does, and it is greater than any relationship between an owner and their car.

In our current society we generally agree that driving a car is a risk, as there is no guarantee of safety, in the same way that riding a bike or flying in a helicopter doesn’t guarantee safety. Therefore, if you drive your car, you accept that at any point an accident could happen that injures you, whether you are at fault or not. Similarly, when riding in a driverless car, the passenger should accept that travelling at speed in a car controlled entirely by a computer poses a risk. The pedestrian, by contrast, has not relied on any means of transport the way the passenger has, and provisional statistics suggest that fatal accidents involving pedestrians increased by 3% in the year ending June 2018 [7]. As cars are inherently dangerous and the passenger has accepted these risks, it is unfair for the pedestrian to pay the price. Virtue ethics backs this argument up, as it focuses on the nature of the actor: the passenger was willing to take the risk of riding in an autonomous car, and the pedestrian wasn’t.

According to Kantian duty ethics, we should do the right thing because it is the right thing to do, not because of the outcome. The right thing to do, then, would be to protect the pedestrians, who could simply be in the wrong place at the wrong time. A direct impact between a car and a pedestrian is more than likely to kill that person, whereas a car hitting another vehicle or a bollard has a significantly lower chance of killing the passengers, thanks to the car’s protective design. Moreover, if cars were programmed to protect their passengers at all costs, pedestrians might feel unsafe crossing the road, other road users might feel unsafe in their own cars, and even the passengers might feel guilty about being prioritised. Ethically, it would be wrong to change the natural course of events for the safety of the passengers at the cost of pedestrians.

From a virtue ethics standpoint, whereby ‘a right act is the action a virtuous person would do in the same circumstances’, most people viewing the scenario as an external observer would want to preserve the most innocent party, which in most cases is the pedestrian, as they have no control whatsoever over the outcome. On the other hand, a ‘driver’ who is paying attention could spot a potential hazard faster than a computer, thanks to instinct and an understanding of how other people act, implying that if the ‘driver’ has not acted, that person would be to blame.

Initial Decision

We believe, based on the reasons discussed in this article, that autonomous vehicles should protect the ‘driver’ and passengers of the vehicle.

References

[1] https://www.wired.com/story/when-will-self-driving-cars-ready/

[2] https://www.theverge.com/2018/7/3/17530232/self-driving-ai-winter-full-autonomy-waymo-tesla-uber

[3] https://eandt.theiet.org/content/articles/2017/02/ultrafast-camera-boosts-reaction-time-for-self-driving-vehicles-and-drones/

[4] https://mic.com/articles/192103/driverless-cars-prioritize-passenger-or-pedestrian-safety-study-shows-how-millions-feel#.mLau3xc1Y

[5] https://www.technologyreview.com/s/612341/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/

[6] https://auto.howstuffworks.com/car-driving-safety/safety-regulatory-devices/crumple-zone.htm

[7] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/754685/quarterly-estimates-april-to-june-2018.pdf

6 thoughts on “Autonomous vehicles should protect the ‘driver’ and their passengers at all costs!”

  1. This is a well-written article with lots of ethical reasoning! It’s rather difficult to add any comments to it.

    However, I can think of an obvious one: in your graphic we have an alarming situation; surely a driverless car should be slowing down to stop as it approaches a pedestrian crossing? 😉

  2. A very interesting question with a well-written response. This article had me changing my mind on the topic with every paragraph I read, however in the end I agree with the decision you came to.
    Consider the situation where a politician is riding in an automated vehicle and someone who disagrees with or hates this politician wants to harm them. If the vehicle didn’t prioritise the occupants, the car would swerve and probably seriously injure the occupants at the least. I believe such attacks would be too easy, and would become too common, for this system to be suitable. If the occupants are prioritised, it is much more likely that most problems will be accidents rather than intentional attacks.

  3. This is an excellent read, and I tend to agree with the final decision made.

    I do wonder, however, what level of programming or machine learning would be required for a vehicle to “spot a potential hazard” and react the way human instinct does? At that point the ethics in question become very similar to those of a human controlling the vehicle, rather than a computer.

  4. Valuable points were made on either side of this debate.
    It’s a convenient fact that people are more willing to buy an autonomous car if they know it will prioritise their own life. Glad to know it’s more moral to have the cars programmed that way too!

  5. A great read with reasonable points on both sides of the debate. I have to agree with the final conclusion. It will be interesting to see how this issue is dealt with in the not-so-distant future.

  6. This is a fantastic piece of writing that covers an issue that will be at the forefront of the next several years.

    I tend to agree with the final decision made in the article; however, I see an issue whichever way the question is solved: there will always be human error, and we will never get rid of it. It could lie in the programming or in elements outside our control. We have already seen issues with current car software failing to see red lights or obstacles, causing crashes while people were using the automated driving features, and we will still face human error when people drive themselves.
