With driverless cars looking certain to be a staple of our future, and expected to be commonplace on the roads by 2030, the ethics behind them is being questioned. Consider the unlikely scenario in which a fatal accident is inevitable and the autonomous vehicle must choose between the life of the person inside the car and that of a pedestrian who happens to be walking by: whom should the vehicle choose? Should a computer be allowed to decide whether to save the life of its passenger or of the pedestrian? Although this scenario is very unlikely, an AI must be programmed with an outcome for every possible situation so that its goals stay aligned with ours; experts believe that this alignment is the biggest challenge and could cause the most harm if done incorrectly, with the AI heading towards a ‘roadblock’. This scenario is much more complex than it first seems.
Autonomous vehicles will have reaction times vastly superior to our own and will be able to monitor road conditions constantly, as they would need to prove their safety before being permitted on public roads. But even cutting-edge technology is not immune to disaster: the Titanic, for example, was deemed unsinkable before its maiden voyage.
Therefore, if a child unexpectedly ran in front of the car, should the car swerve, killing the occupants, or hit the child? In this case most people would prioritise the child, but if an older person stepped out from behind a parked car, what should the car’s decision be then? MIT has produced a survey, the Moral Machine, based on the classic trolley problem, which asks who should be spared when different people and animals are at stake. Some results are near-unanimous, whereas others are more divided. For example, most people would prioritise humans over animals, and many would also choose dogs over criminals; but would this depend on the severity of the crime, and would the vehicle have all this information available to make an informed decision? Would the vehicle have to obey the laws of the road where a human driver could ignore them and save themselves?
Suppose instead that the autonomous vehicle knew nothing about the people involved except that one is a passenger and the other is outside the car: who should it choose to save? This situation still has complications. If person B (the person outside the vehicle) stepped into the road from behind an obstacle without looking, and the vehicle had nowhere else to go, why should the passenger be punished for person B’s lack of awareness? If both people were equally guiltless, the vehicle must choose between causing one regrettable fatality while ensuring that the vehicles behind do not face an equally difficult decision, and potentially causing a multi-vehicle accident that leads to several deaths. Overall, the vehicle should choose the option that involves as few people as possible and causes the least disruption, which unfortunately would mean person B being chosen in this case.
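The minimisation rule described above can be sketched in code, purely as a hypothetical illustration: no real autonomous vehicle exposes such an interface, and the option names and numbers below are invented assumptions.

```python
# Hypothetical sketch of the harm-minimisation rule discussed above.
# All names and figures are illustrative assumptions, not a real AV API.

def choose_outcome(options):
    """Pick the option with the fewest expected fatalities,
    breaking ties by the least wider disruption (vehicles affected)."""
    return min(options, key=lambda o: (o["fatalities"], o["vehicles_affected"]))

# The scenario from the text: swerving risks a multi-vehicle pile-up,
# so the rule regrettably selects the outcome involving person B.
options = [
    {"name": "continue (person B)", "fatalities": 1, "vehicles_affected": 1},
    {"name": "swerve (pile-up)",    "fatalities": 3, "vehicles_affected": 4},
]
print(choose_outcome(options)["name"])  # continue (person B)
```

The sketch makes the essay’s point concrete: once the rule is reduced to a comparison of numbers, the moral weight of the decision disappears from the code entirely, which is precisely what makes delegating it to a machine uncomfortable.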
In this situation the car should act as an extension of the person driving it, so from a hedonistic point of view the car should always prioritise the people inside it, as this maximises the ‘pleasure’, which in this case is the occupants’ safety. Furthermore, deontology suggests that an action is deemed right or wrong not by its consequences but by whether it was carried out with good intentions. Applied to the automated car, the argument can be made that prioritising the safety of the person in the car is simply an act of self-preservation, so the action is morally justified. Given that so many different factors change who the public believe should be spared, and that the sensors currently used in autonomous vehicles cannot distinguish between people, I would argue that such vehicles are not yet competent enough to make this decision and therefore should not be road legal until they are. The counterargument, however, is that precisely because autonomous vehicles cannot distinguish rich from poor or young from old, they are fairer, since all human life is treated as equal.
Despite opinions that self-preservation may be the fairest outcome for the passengers of the car, we believe that, for the greater good, protecting innocent bystanders and pedestrians is the more moral outcome.
Firstly, cars are already designed to protect their passengers in an accident: crumple zones reduce the likelihood of a fatality in a crash. Why, then, should autonomous cars be any different? Surely the moral argument is that a pedestrian’s life is far more valuable than the physical car. Care ethics supports this: although no relationship may exist between the passengers and the pedestrian, a relationship as fellow humans exists, and it is greater than any relationship between an owner and their car.
In our current society we generally agree that driving a car carries risk, as there is no guarantee of safety, just as riding a bike or flying in a helicopter does not guarantee safety. If you drive your car, you accept that at any point an accident could happen that injures you, whether you are at fault or not. Similarly, when riding in a driverless car, the passenger should accept that travelling at speed in a car controlled entirely by a computer poses a risk. The pedestrian, by contrast, has not relied on any means of transport as the passenger has, and provisional statistics suggest that fatal accidents involving pedestrians increased by 3% in the year ending June 2018. Since cars are inherently dangerous and the passenger has accepted these risks, it is unfair for the pedestrian to pay the price. Virtue ethics supports this argument, as it focuses on the nature of the actor: the passenger was willing to take the risk of riding in an autonomous car, and the pedestrian was not.
According to Kantian duty ethics, we should do the right thing because it is the right thing to do, not because of the outcome. The right thing in the scenario of protecting either the passengers or innocent pedestrians would be to protect the pedestrians, who could simply be in the wrong place at the wrong time. A direct impact between a car and a pedestrian is more likely than not to kill the pedestrian, whereas a car hitting another vehicle or a bollard has a significantly lower chance of killing the passengers, because of the way cars are designed. Finally, if cars were programmed to protect their passengers at all costs, pedestrians might feel unsafe crossing the road, other road users might feel unsafe in their cars, and even the passengers might feel guilty about being prioritised. Ethically, it would be wrong to change the natural course of events for the safety of the passengers at the cost of pedestrians. From a virtue ethics standpoint, whereby ‘a right act is the action a virtuous person would do in the same circumstances’, it can be inferred that most people, viewing the scenario from outside, would want to preserve the most innocent party, which in most cases is the pedestrian, who has no control over the outcome. On the other hand, a ‘driver’ who was paying attention could spot a potential hazard more quickly than a computer, through instinct and an understanding of how other people act, implying that if the ‘driver’ has not acted, that person would be to blame.
We believe that, based on the reasons discussed in this article, autonomous vehicles should protect the ‘driver’ and passengers of the vehicle.