Autonomous vehicles (AVs), once the stuff of fantasy, have become a reality – virtually overnight. In an age of rapid technological advances it is easy to lose sight of an essential question: is the development of self-driving cars ethically justified?
This blog will dissect and analyse the different ethical viewpoints on self-driving cars in a ‘For and Against’ format, allowing the reader to come to an informed decision on the matter.
[AI]ccelerating the Future
Greater efficiency, improved safety and freedom from the chore of driving are just a few of the reasons why the AV industry is flourishing. According to a study by Harvard Health Watch, the average person spends 296 hours driving annually – that’s more than 12 DAYS a year spent behind the wheel!
The ethical considerations of AI-driven cars are well contested, but from a hedonistic point of view the benefit of AVs can be considered more than justified. Studies have shown that driving in general, whether it be a daily commute to work or a long journey to meet family, is linked with higher rates of obesity, depression and stress. Hedonism argues that it is the right of the individual to do everything in their power to achieve maximum happiness or pleasure – net pleasure being defined as pleasure minus pain.
Most people consider the daily drive to work more of a “pain” than a pleasure, and the implementation of AI cars would do much to reverse this.
Waymo (Google’s self-driving car programme) has been tested over more than 2 million miles of autonomous driving in the U.S., and of 18 accidents, only one was the fault of the car – a result of poor road conditions. The statistics show that Waymo has an accident rate ten times lower than that of the safest demographic of human drivers (60–69 year-olds) and forty times lower than that of new drivers. Driverless cars are very thoroughly tested, and the statistics speak for themselves.
An ethical argument commonly made against AVs is that in the future they may be making difficult moral decisions, choosing the lesser of two evils. But one must ask oneself: is leaving the outcome to fate really a morally better option?
In 2015, Prof. Iyad Rahwan of MIT conducted a series of surveys in which stakeholders were presented with different ‘sacrifice scenarios’ and asked to determine the best course of action. The common moral attitude emerging from the experiment was that the AV should swerve, opposing the view that the outcome should be left to fate. The results showed that most people (76%) adopted the utilitarian approach to safety ethics.
Utilitarianism states that the action which maximises utility, or minimises total harm, is the correct one. Jeremy Bentham, the founder of utilitarianism, explained it as the sum of all the happiness which results from an action, minus all the suffering involved. This study showed that though there may be some difficult questions that need answering, there is a broadly agreed, structured approach which can be taken to answer even the hardest of them.
License to Kill
It is undeniable that there will be situations where self-driving cars must decide whom to kill. Those with an eager interest in the development of self-driving cars will be aware of the classic trolley thought experiment: a trolley is hurtling down a track towards five people, but there is an opportunity to switch it onto a path with only one person. What would you do?
It can be argued from an act-consequentialist viewpoint that the consequences of each outcome should be examined and the good maximised. In act consequentialism the ‘good’ is defined as human welfare, so in simple terms the outcome which maximises human welfare should be chosen – i.e. the track should be switched. However, this raises a myriad of issues: would you choose to kill one baby over five elderly people?
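To make the act-consequentialist calculus concrete, the “maximise welfare” rule described above can be sketched as a simple decision function. This is purely illustrative – the action names and welfare scores are hypothetical assumptions for the trolley scenario, not anything a real AV system implements:

```python
# Illustrative sketch of an act-consequentialist decision rule:
# pick the action whose total welfare (harms counted as negatives)
# is highest. All names and scores here are hypothetical.

def choose_outcome(outcomes):
    """Return the action with the greatest total welfare.

    `outcomes` maps each possible action to a list of per-person
    welfare changes; negative values represent harm.
    """
    return max(outcomes, key=lambda action: sum(outcomes[action]))

# The classic trolley scenario: staying the course harms five
# people, switching the track harms one.
trolley = {
    "stay_on_track": [-1, -1, -1, -1, -1],  # five people harmed
    "switch_track": [-1],                   # one person harmed
}

print(choose_outcome(trolley))  # -> switch_track
```

The sketch also exposes the objection raised above: the moment the welfare scores for a baby and an elderly person differ, someone has to decide those numbers, and that decision is exactly where the ethical difficulty lies.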
The complexity and varying nature of this issue makes subjectively ‘incorrect’ decision-making inevitable, and could incur heavy costs on stakeholders – both car owners and car companies. Fewer people would want to buy a car which may at some point make a very costly mistake. On December 7th, 2017, a motorcyclist was knocked to the ground by an autonomous vehicle and the first lawsuit of its kind ensued. There were no fatalities in this case, but imagine the consequences when systems are actively participating in choosing whom to kill.
As depicted in the film ‘Fast and Furious 8’, self-driving cars would be highly vulnerable to cyber-attacks by hackers. Though that was only a film concept, there are real safety concerns around this issue. Wil Rockall, a director in KPMG’s cyber security team, warns that spam jams and hacker-driven congestion may affect the self-driving experience in the future. Data security matters to ordinary citizens and government officials alike. Protection of personal information and security are very real concerns which need answering: whose responsibility will it be to counter cyber threats to AVs? No doubt it will be a costly undertaking!
Social contract theory, a form of rule-based ethical egoism, holds that although all our actions are selfishly motivated, people are better off living in a world with moral rules than in one without them. It outlines how, in the absence of moral rules, we are subject to the whims of other people’s selfish interests – in this case, the hackers. By adopting AVs, we would naively be working towards a road system in which enforcing moral rules would be much more difficult, creating an opportunity for those with more selfish interests to take advantage. Would you feel safe in the seat of an AV which could lose control at any moment?