Humans have long entrusted some degree of autonomy to weaponry in order to protect their countries. However, the application of robot troopers in warfare has prompted a debate among military planners, roboticists and ethicists over the development and use of such robots with minimal human oversight. An armed, autonomous robot is designed and engineered to deal with its environment on its own: it is equipped with technology that lets it perceive its physical surroundings and act accordingly. This article reviews the technological potential of killer robots and the ethical considerations that must be taken into account in their development.
A Solution, Not a Threat
During World War II, around 57 million people were killed, and some 38 million of them were civilians, making non-combatant casualties higher than military deaths. Conventional weaponry with no accurate targeting system, such as the bazooka, the grenade and the atomic bomb, caused unnecessary fatalities and destroyed the entire perimeter of the targeted area. Killer robots could prevent such tragedies: their sophisticated sensors can identify targets better than human senses can, and they execute an attack exactly as programmed. Their sight is not limited by fog, and their mobility is not restricted by physical fatigue, in contrast to human soldiers.
As artificial intelligence (AI) advances, these robots will be able to identify a threat by distinguishing a person holding a rake from one holding an M16 rifle. They could be programmed to evaluate how important a target is to achieving a military objective. If such technology could avert a catastrophic massacre, is there not an ethical obligation to develop it? Moreover, killer robots could replace human soldiers during wartime and significantly reduce casualties. This maximizes happiness for the soldiers, their families and the community, because the soldiers' lives are not jeopardized. By utilitarian reasoning, killer robots would therefore be an ethical choice during wartime.
The application of autonomous battlefield robots in warfare has the potential to overcome the psychological shortcomings of humans, such as pain, fear, vengefulness and anger, which can lead to war crimes. Robots are programmed without these emotions. Imagine a soldier in the midst of combat whose comrades are dying in front of his eyes. His ability to perform may suffer, or he might even unlawfully avenge their deaths. Robots would not be disturbed by these emotions and could act in ways that neither jeopardize the mission nor violate the laws of war.
The algorithms instilled in killer robots would dictate when they may engage a target, in accordance with the Rules of Engagement. Humans grow tired, stressed and distracted when subjected to prolonged, dull tasks. Robots are immune to these effects: they can perform a task for an extended period and verify that every required criterion for engagement is met before firing. In 2009, a program was developed to ensure that an armed, autonomous robot would act within the Laws of War and the Rules of Engagement. Such research is significant because it aims to guarantee that a killer robot acts according to existing law, which parallels the rule-based reasoning of Kantian ethics.
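The idea of verifying every engagement criterion before execution can be pictured, in highly simplified form, as a veto-based gate: any single failed check blocks the action, and the default is to hold fire. The sketch below is purely illustrative; all field names, checks and thresholds are hypothetical and do not describe the 2009 program or any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A detected entity (all fields hypothetical, for illustration only)."""
    is_armed: bool
    is_combatant: bool           # classifier output, assumed reliable here
    inside_engagement_zone: bool
    collateral_risk: float       # estimated chance of civilian harm, 0..1

def may_engage(c: Contact, max_collateral_risk: float = 0.05) -> bool:
    """Return True only if every rule-of-engagement criterion is met.

    Any single failed check vetoes engagement; the default is to hold fire.
    """
    return (
        c.is_armed
        and c.is_combatant
        and c.inside_engagement_zone
        and c.collateral_risk <= max_collateral_risk
    )

# A civilian holding a rake is never a valid target.
print(may_engage(Contact(False, False, True, 0.0)))   # False
# An armed combatant inside the zone, with low collateral risk, passes.
print(may_engage(Contact(True, True, True, 0.01)))    # True
```

Note that the critics' objection discussed later applies precisely here: the gate is only as trustworthy as the classification feeding it.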
Technological Accuracy and Accountability
Although the idea of a war with minimal casualties remains appealing, the central issue is still whether military robots should have the right to decide whom to kill and whom to spare. With the technology already available and that still under research, there is little doubt that we can build autonomous military robots. But should we?
In April 2013, a group of engineers, AI and robotics experts, and other scientists and researchers from 37 countries published the statement "Scientists' Call to Ban Autonomous Lethal Robots". The call was based on the lack of scientific evidence that future military robots could acquire the functions needed for accurate target recognition, situational awareness, or decisions about the proportional use of force. The result would be an unacceptable level of collateral damage; the statement therefore concludes that decisions about the use of violent force must not be delegated to machines.
The biggest concern raised by opponents of autonomous robots is that the robots would choose their own targets. Noel Sharkey, an esteemed computer scientist, has called for a ban on "lethal autonomous targeting" because it breaches the Principle of Distinction, one of the most vital rules of warfare: a robot would find it very difficult to differentiate a civilian from a combatant, a judgment that is difficult even for human beings.
It is difficult to determine who is accountable for a crime unlawfully committed by robots during war. Unlike the case of human soldiers, the rules for robots remain unclear, since no human physically "pulled the trigger". Suppose a military robot violates existing law such as the Law of Armed Conflict (LOAC), for instance by harming civilians or the sick and wounded; different parties will point fingers at one another. Who should be blamed? The manufacturer? The programmer? Or is the nearest human commander at fault for the damage?
Reflecting on Kantian ethics in the age of artificial intelligence and robotics: although killer robots incorporate advanced technology to carry out the same functions as a soldier, if not better, the human capacity to take responsibility for one's own actions does not exist within artificial intelligence. The lack of accountability should itself be a reason to revisit such advancement. Knowing that these robots cannot be held liable for their actions, is it ethical to give them full trust and access to end human life, let alone to perform lethal missions in the name of defense? In reality, the complexity of war cannot be reduced to a programmed system. Although some might say robot war is inevitable, these concerns should be evaluated thoroughly to ensure that killer robots cause no more harm than war does today.
In conclusion, although killer robots were initially conceived as a means of achieving a greater public good and reducing casualties for the aggressor, does that justify the possibility that, in time, autonomous armed robots will make war more likely than it already is? And if war does become more likely, what will happen to a technologically inferior opponent?
We do not support giving robots the right to kill.
“If we went to war and no one slept uneasy at night, what does that say about us?”