Should Robots Have the Right to Kill?

Group 33

Humans have long placed a degree of trust in autonomous weaponry to protect their countries. However, the prospect of robot troopers in warfare has prompted a debate among military planners, roboticists and ethicists about developing and deploying such robots with minimal human oversight. An armed, autonomous robot is designed and engineered to deal with its environment on its own, equipped with technology that helps it understand its physical surroundings and act accordingly. This article reviews the technological potential of killer robots and the ethical considerations that must be taken into account in their development.

A Solution, Not a Threat

During World War II, around 57 million people were killed, 38 million of them civilians, meaning non-combatant casualties exceeded military deaths. Conventional weaponry with no accurate targeting system, such as bazookas, grenades and atomic bombs, caused unnecessary fatalities and devastated entire target areas. Killer robots could prevent such tragedies: their sophisticated sensors can identify targets better than human senses, and they execute a strike exactly as programmed. Unlike human soldiers, their sight is not limited by fog and their mobility is not restricted by physical limitations.

As Artificial Intelligence (AI) advances, these robots will gain the capability to identify a threat by distinguishing a person holding a rake from one holding an M16 rifle. They could be programmed to evaluate how important a target is to achieving a military objective. If such an advancement could avert a catastrophic massacre, is it not an ethical obligation to develop it? Moreover, killer robots could replace human soldiers during wartime and significantly reduce casualties. This would maximise happiness for the soldiers, their families and the community, because the soldiers’ lives are not being jeopardised. By the theory of utilitarianism, killer robots would be an ethical choice during wartime.
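
A minimal sketch may make this concrete (Python; the class labels, threshold and detector output format are all hypothetical, not a real targeting system): the robot would treat a person as a threat only when its classifier both reports a weapon and is highly confident, so any ambiguity defaults to holding fire.

```python
# Hypothetical perception gate: engage only on a confident weapon detection.
# Labels, threshold and detection format are illustrative assumptions.

WEAPON_CLASSES = {"rifle", "rocket_launcher"}
CONFIDENCE_THRESHOLD = 0.99  # deliberately conservative

def is_threat(detection):
    """Treat a person as a threat only if a weapon is detected with very
    high confidence; anything ambiguous defaults to 'not a threat'."""
    label, confidence = detection
    return label in WEAPON_CLASSES and confidence >= CONFIDENCE_THRESHOLD

print(is_threat(("rake", 0.97)))    # False: a rake is not a weapon
print(is_threat(("rifle", 0.80)))   # False: detection not confident enough
print(is_threat(("rifle", 0.995)))  # True: confident weapon detection
```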

The application of autonomous battlefield robots in warfare has the potential to overcome the psychological shortcomings of humans, such as pain, fear, vengefulness and anger, which can lead to war crimes. Robots, after all, are programmed without these emotions. Imagine a soldier in the midst of combat whose comrades are dying in front of his eyes. His ability to perform well in combat may suffer, or he might even unlawfully avenge their deaths. Robots would not be disturbed by such emotions and could act in ways that neither jeopardise the mission nor violate the laws of war.

The algorithms instilled in killer robots would dictate when they may engage a target in accordance with the Rules of Engagement. Humans get tired, stressed and distracted when subjected to prolonged, dull tasks. Robots suffer no such lapses: they can perform a task for an extended period and ensure all required criteria for engagement are met before execution. In 2009, a program was developed to ensure that an armed, autonomous robot would act within the Laws of War and Rules of Engagement. Such work is significant for guaranteeing that killer robots act according to existing law, which is parallel to Kantian ethics.
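
As an illustration only (this is not the 2009 program itself; every criterion, name and value below is a simplifying assumption), such an engagement gate can be written as a conjunction of checks in which any single failed criterion vetoes the use of force:

```python
# Hypothetical Rules-of-Engagement gate: all criteria must hold before
# firing is even considered. The criteria and fields are illustrative only.

from dataclasses import dataclass

@dataclass
class Situation:
    target_is_combatant: bool       # Principle of Distinction satisfied?
    civilians_in_blast_radius: int  # collateral-damage estimate
    force_is_proportional: bool     # Principle of Proportionality satisfied?
    engagement_authorised: bool     # standing order or human authorisation

def may_engage(s: Situation) -> bool:
    """Return True only if every engagement criterion is met;
    any single failed check vetoes the engagement."""
    return all([
        s.target_is_combatant,
        s.civilians_in_blast_radius == 0,
        s.force_is_proportional,
        s.engagement_authorised,
    ])

# Civilians present: the engagement is blocked despite all other criteria.
print(may_engage(Situation(True, 2, True, True)))  # False
print(may_engage(Situation(True, 0, True, True)))  # True
```

One appeal of such a veto-style design is auditability: every withheld engagement can be traced to the specific criterion that failed.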

Technological Accuracy and Accountability

Although the idea of war with minimal casualties remains appealing, the main issue still lies in whether military robots should have the right to decide whom to kill and whom to spare. With the technology already available and the research under way, there is no doubt that we can build autonomous military robots, but should we?

In April 2013, a team of engineers, AI and robotics experts, and other scientists and researchers from 37 countries published the statement “Scientists’ Call to Ban Autonomous Lethal Robots”. The call was grounded in the lack of scientific evidence that future military robots could acquire the functions needed for accurate target recognition, situational awareness, or decisions about the proportional use of force. The result would be an unacceptable level of collateral damage; hence the statement concludes by insisting that decisions about the use of violent force must not be delegated to machines.

The biggest reason for opposing autonomous robots is that they would choose their own targets. Noel Sharkey, an esteemed computer scientist, has called for a ban on ‘lethal autonomous targeting’ because it breaches the Principle of Distinction, one of the most vital rules of warfare: a robot would find it very difficult to differentiate between a civilian and a combatant, a distinction that proves difficult even for human beings.

It is also difficult to determine who is accountable for a crime unlawfully committed by a robot during war. In contrast with human soldiers, the rules for robots remain unclear, since no human physically “pulled the trigger”. Suppose, for instance, that a military robot violates existing law such as the Law of Armed Conflict (LOAC) by harming civilians or the sick and wounded; the various parties involved will simply point fingers at one another. Who should be blamed? The manufacturer? The programmer? Or is the nearest human commander at fault for the damage?

Reflecting on Kantian Ethics in the Age of Artificial Intelligence and Robotics: although killer robots incorporate advanced technology to perform the same functions as a soldier, if not better, human attributes such as taking responsibility for one’s own actions do not exist within artificial intelligence. The lack of accountability alone should be reason to revisit such an advancement. Knowing that these robots cannot be held liable for their actions, is it ethical to trust them fully with ending human life, let alone with performing lethal missions in the name of defence? In reality, the complexity of war cannot be reduced to a programmed system. Although some might say robot war is inevitable, these concerns should be evaluated thoroughly to ensure killer robots cause no more harm than war does today.

Initial Decision

In conclusion: although killer robots were initially invented and developed as a potential means of achieving a greater public good and reducing casualties for the aggressor, does that justify the possibility that, in time, autonomous armed robots will make war more likely than it already is? And if war does become more likely, what will happen to a technologically inferior opponent?

We do not support giving robots the right to kill.

“If we went to war and no one slept uneasy at night, what does that say about us?”

10 thoughts on “Should Robots Have the Right to Kill?”

  1. “the idea of war with minimal casualties remains appealing” – the idea of no war is even more appealing (to me).
    Have a look at Asimov’s Laws of Robotics, which were developed in the early 1940s.

    From my viewpoint as a lecturer: please develop the ethical argumentation.

  2. Hi, a very good article indeed. There is one question I want to ask: in what way do killer robots help alleviate the effects of war? It is true that using robots means fewer human soldiers need to be dispatched, but in war the two sides usually do not have balanced technological advancement. Is it ethical for killer robots to be deployed against forces that have only human soldiers at their disposal? It seems that countries or organisations with this technology will be able to oppress others more easily.

    Other than that, I believe it is better to think of ways to stop a war from happening than to find ideas for alleviating the effects of war. War consumes a lot of funds, and developing killer robots will not reduce the cost of war. Without war, those huge funds could be channelled into charity and research that benefits humanity (e.g. curing cancer or HIV).

  3. Interesting article.

    I disagree with developing killer robots. As highlighted in your article, I doubt that a killer robot can actually minimise casualties in a war, because there is insufficient proof of that effect. I think the main flaw of killer robots is that they lack the human-to-human emotions needed when facing a human counterpart, which I believe is part of care ethics. For example, humans can read a target’s body language to tell whether they actually intend to surrender, perhaps from a facial expression showing fear, or from trembling. Can a robot properly recognise these signs and abort its mission to kill when the target is holding a weapon but is no longer willing to fight? Robots do not have the positive emotions that humans have, such as empathy and mercy.

    Having said that, I also disagree because developing these robots indicates that some parties are still considering waging wars in the future, and I think that in itself should be avoided. The intention of some people to make wars seem ‘positive’ to civilians by developing such means is alarmingly unethical to me. At the end of the day, these robots will be expensive due to their development costs, and their users will only be rich, developed countries. So will these robots be used to oppress less developed countries with more technologically advanced warfare?

  4. Interesting article. But in my opinion, there is no need to create a robot that kills, because that would just encourage war. We should think about how to avoid war, not how to create one.

  5. First, robots should not have the right to kill!

    To answer the first issue you both introduced, on the technical potential: for me it rates only 4/10 against a human. Although on the physical side robots can do what some humans cannot do without a machine to assist them, can a robot really achieve capabilities like a human being’s? A human has an equilibrium between the physical and the mental, between IQ and EQ, whereas a robot only works according to the program written by its designer. The next question is: how can a robot safeguard a country’s data security systems if it is hijacked?

    On the second issue, the most important point when dealing with ethics is that war itself must be avoided or abolished in order to respect the very idea of the ethical. When it comes to war, ethical considerations are no longer taken into account as long as each party gets its own benefit.

  6. I think everyone should stop for a minute and consider the ‘robot’ behaviour already trained into most military forces. Automated robots are a curse, no doubt. There will be unintended deaths either way, and the denial of those unintended deaths will, of course, be supported by individuals who already see war as the way to settle problems between countries.

  7. There are some problems with your arguments.

    Firstly, the AI systems being developed do not represent the completely defined system that the programmers in your example would have produced. These AIs are programmed to “learn” and are then “trained” for the task. If an autonomous AI is trained and released onto the battlefield, it may continue to learn and modify its behaviour without the correcting feedback of a human, which makes it unpredictable; the longer it “learns” and self-modifies without human feedback, the more unpredictable it becomes.

    Secondly, if a massacre were about to be committed, a human, unlike a computer, would question the order. In other words, the human (one hopes) would not blindly follow instructions, in contrast to these AI robots.

  8. Accountability isn’t really the issue: the lethality is what is problematic. Autonomous killing-robot hardware and software are produced primarily by countries with a technological advantage. It is marketplace opportunism in that regard, and countries with limited access to technology will not be able to defend themselves adequately.

  9. Interesting one. From my point of view, the idea of bringing in killer robots will somehow trigger a drift towards starting wars, something no longer familiar to us these days.

  10. I agree that this is a solution, as machines have more dead-eye aim, which will result in fewer civilian casualties.

    One other way is simply to enhance military gear and maximise defensive capability, so that if a soldier goes bad, he is still accountable for his actions.

    I believe in the advancement of technology, and a better future.
