Each year around 1.25 million people die in car accidents alone, equating to roughly 3,500 deaths every day. With autonomous vehicles on the roads becoming more of a reality, would their use be the right answer to this problem? This article discusses the reasons for and against the use of autonomous vehicles from a range of ethical viewpoints, focusing mainly on the safety aspect of the technology.
‘Look, no hands…’
According to studies, the most common cause of death among young adults is unintentional injury, with road accidents accounting for the vast majority of cases. With 94 per cent of fatal vehicle crashes attributable to human error, the potential of autonomous vehicle technologies to reduce deaths and injuries on our roads urges us to action. The artificial intelligence incorporated in autonomous cars exceeds human abilities by collecting far more information about the current situation. Sensors and cameras feed the system with forecast information and the exact positions and speeds of all surrounding vehicles, so the system can predict and avoid potential collisions on the road. Even in emergency braking situations, the average human takes around 0.25 s to react, whereas an automated system can react almost instantaneously.
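To put that reaction-time gap in perspective, a simple back-of-the-envelope calculation shows how many metres a car travels before braking even begins. This is only an illustrative sketch: the 0.25 s human figure comes from the text above, while the vehicle speed and the near-instant machine response time are assumptions for the example.

```python
# Sketch: distance travelled during the reaction interval, before braking starts.
# distance = speed * reaction_time (constant speed until the brakes engage).

def reaction_distance(speed_ms: float, reaction_time_s: float) -> float:
    """Metres covered while the driver or system is still reacting."""
    return speed_ms * reaction_time_s

speed = 30.0  # assumed motorway speed: 30 m/s, roughly 108 km/h

human_gap = reaction_distance(speed, 0.25)      # 0.25 s human reaction (from the text)
machine_gap = reaction_distance(speed, 0.01)    # assumed near-instant system response

# At this speed, the human travels several extra metres before braking begins,
# which can be the difference between a near miss and a collision.
print(human_gap, machine_gap, human_gap - machine_gap)
```

Under these assumptions the human driver covers 7.5 m before touching the brakes, versus about 0.3 m for the automated system, a gap of roughly 7 m of uncontrolled travel.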
A utilitarian 'pleasure vs pain' approach would suggest that the use of AVs is a good option, since the pleasure would largely outweigh the pain: the technology could potentially save hundreds of thousands of lives and prevent millions of crashes every year.
In addition to reducing crashes, AV technology can benefit the environment. By accelerating and decelerating more smoothly than a human driver, it can reduce fuel consumption and CO2 emissions by at least 4 per cent. Traffic jams, which are often caused by accidents and other human factors, would be minimized, meaning fewer stops and slowdowns, higher effective speeds and therefore greater roadway capacity.
A common ethical concern made by the public against AVs is that in the future the AVs will be making moral decisions in emergency situations. But is it really a concern? Do humans have time to weigh up options in an instant and avoid the worst?
A number of online surveys, with over 2 million participants from over 200 countries, presented different accident scenarios in which the vehicle must choose between two potentially fatal options. The findings showed that the most common preference among participants was to spare the lives of the many rather than the few, suggesting that the vast majority of people adopt a utilitarian approach to safety ethics.
The vast majority of accidents are caused by drivers violating the Highway Code; this is what we refer to as "human error". The major advantage of autonomous vehicles is the elimination of human error, and since the AI will have all the rules and regulations in its programming, it also satisfies Kant's principle of equality: all vehicles will operate according to the same laws.
No Hands? No Thanks.
Safety for Drivers and Pedestrians
One of the main ethical issues surrounding autonomous cars arises when accidents happen. Although it could be argued that removing the human element of driving would make our roads safer, accidents involving autonomous vehicles do happen. Many higher-level autonomous systems are still in development, yet they have already caused accidents; most notably, a pedestrian was killed by one of Uber's self-driving cars in March 2018. This raises the question of whether it is reasonable to put people's lives at risk by using autonomous cars.
Car manufacturers may program their vehicles to maximize the safety of the occupants. But will pedestrians want this? Such a choice is not in line with Kant's categorical imperative:
“Act only according to that maxim whereby you can at the same time will that it should become a universal law”
Killing pedestrians is clearly immoral behavior.
In Kant's view, humanity is an end in itself: no one should, at any time, treat themselves or others merely as a means, but should always treat people as ends in their own right.
Legal Liability and Insurance Issues
Machines cannot guarantee a 100 per cent safety rate: a component could be faulty, or the algorithm flawed, and failures may also arise from control-system malfunction or hacking. When accidents do occur, there is the further issue of who to blame. As the vehicle is controlled by a computer, liability shifts from the driver to the manufacturer, the software developer, the seller, car regulatory agencies and so on, any of whom may bear criminal responsibility for an accident. This makes accountability much harder to pursue.
Insurance claims will also become more expensive, as autonomous cars cost more to repair due to their complex systems. Together with the shift in liability described above, this means the motor insurance industry will need to adapt if autonomous cars become widespread.
There is also the effect that a rise in driverless car usage could have on employment. The emergence of self-driving cars will lead to unemployment among professional drivers: up to 4 million jobs could be affected. Would it be ethical to put some people out of work so that others can enjoy the convenience of driverless cars? An unemployed driver who cannot find a new job might even turn to crime. Kantian theory holds that for an act to be ethical it must be performed in good will. Although the development of driverless technology may aim to help people and make life easier, its consequences must also be considered. Manufacturers of autonomous cars would not want their own jobs taken away, so should they have the power to take away the jobs of others?
For these reasons, we are against the use of autonomous vehicles on the road.