Should Autonomous Robots be Fighting on the Front Lines?

Group 84

As modern technology improves, the possibility of robots replacing soldiers on the front lines becomes more and more likely. Units such as the Foster-Miller TALON already exist: a multi-terrain rover that can be equipped with weaponry and controlled remotely [1]. At present, such semi-autonomous robots are restricted because a human must give the order to attack a target [2]. However, facial recognition and similar advances in artificial intelligence could soon give unmanned units the capability to make those kinds of decisions for themselves. Should this be allowed to happen? Or have we already gone too far?

A Distinct Advantage

The current cost of a TALON weapons robot is approximately $230,000 [3]. This is inarguably a small price to pay to save a human life. Robots such as these allow soldiers to fight from as far as a kilometre away [3], keeping them far from danger. And they don’t only protect soldiers from enemy fire – the HAZMAT TALON uses radiation and gas sensors to scout locations and determine if they are safe for troops to enter [3].

Currently, there can be a delay between the command to shoot and the action of the robot, because the signal takes time to reach the machine. In a front-line situation, this delay could be the difference between life and death. A robot able to make an incredibly fast decision autonomously would likely have an edge over human combatants in both reaction time and accuracy, giving its side a distinct advantage in military engagements.
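
As a rough illustration of this reaction-time argument, the sketch below compares the two latency budgets. All figures are hypothetical assumptions chosen for illustration, not measurements of any real system:

# Illustrative sketch only: every latency figure below is a hypothetical
# assumption, not a measurement of any real weapons system.

# Remote-operated robot: the video feed must reach the operator, the
# operator must react, and the command must travel back to the machine.
uplink_ms = 150          # assumed delay for the video feed to reach the operator
human_reaction_ms = 250  # typical human choice-reaction time
downlink_ms = 150        # assumed delay for the command to reach the robot

remote_total_ms = uplink_ms + human_reaction_ms + downlink_ms

# Autonomous robot: only the onboard perception and decision loop.
onboard_decision_ms = 50  # assumed time for onboard sensing and targeting

print(f"Remote-operated response: ~{remote_total_ms} ms")      # ~550 ms
print(f"Autonomous response:      ~{onboard_decision_ms} ms")  # ~50 ms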

Lt. Col. Dave Grossman, in his 1995 book On Killing: The Psychological Cost of Learning to Kill in War and Society, studied how soldiers are affected by war. He found that post-traumatic stress disorder (PTSD) is a severe problem in modern warfare, arguing that almost everyone who returns from combat is affected by it, though people cope with it in different ways. The effects of this damaging condition would be lessened by the introduction of autonomous robots on the front line of modern engagements. Some argue that, because robots do not feel compassion, they would make bad soldiers. However, it is perhaps exactly that quality that would make them so useful, sparing human soldiers from this kind of trauma.

It is also important for governments to stay ‘ahead of the curve’ when it comes to these technologies. Terrorist groups or similar organisations gaining control of high-tech robotics could be disastrous for soldiers on the front line. Only by being one step ahead can we be sure that civilians and soldiers alike are kept safe. Indeed, some defensive autonomous weapons already exist: the Russian T-14 Armata tank has an unmanned turret capable of tracking 25 ground targets simultaneously and automatically destroying any of them on command [4].

If killing in war is justified for the soldiers who do it, then it shouldn’t matter who pulls the trigger. Once artificial intelligence reaches the level where it can distinguish enemies as accurately as a human fighter, the morality of who pulls the trigger becomes irrelevant: the rules of war are decided by governments and treaties, so only the efficacy of the process matters.

Cause for Concern

In July 2015, a group of people concerned with the use of autonomous technology in warfare signed an open letter calling for a ban on autonomous weapons at the International Joint Conference on Artificial Intelligence [8]. The letter warns, “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.” It was signed by several high-profile individuals, such as Elon Musk (co-founder of Tesla), Stephen Hawking (physicist) and Steve Wozniak (co-founder of Apple), and it raised questions about the morality, responsibility and legality of using AI technology in warfare.

When discussing the morality of robots in warfare, we must first define the three categories of autonomy, which are based on the level of human involvement: human-in-the-loop, human-on-the-loop, and human-out-of-the-loop weapons [5].

“Human-in-the-loop weapons are robots that can select targets and deliver force only with a human command.” For example, Israel’s Iron Dome system detects incoming rockets, predicts their trajectory, and then sends this information to a human soldier who decides whether to launch an interceptor rocket [6].
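
The practical difference between these categories is where a human sits in the firing decision. The minimal Python sketch below illustrates that control flow; all names are hypothetical, and this is an illustration of the taxonomy only, not how Iron Dome or any real system is implemented:

from enum import Enum

class Autonomy(Enum):
    """The three categories of autonomy described above."""
    HUMAN_IN_THE_LOOP = "in"       # force delivered only on a human command
    HUMAN_ON_THE_LOOP = "on"       # machine acts, a human supervisor can veto
    HUMAN_OUT_OF_THE_LOOP = "out"  # no human involvement in the decision

def may_engage(target, mode, human_approves, human_vetoes):
    """Decide whether force may be delivered against a detected target.
    `human_approves` and `human_vetoes` stand in for operator input;
    all of this is hypothetical, for illustration only."""
    # Human-in-the-loop: nothing happens without an explicit human order.
    if mode is Autonomy.HUMAN_IN_THE_LOOP:
        return human_approves(target)
    # Human-on-the-loop: the machine engages unless the supervisor vetoes.
    if mode is Autonomy.HUMAN_ON_THE_LOOP:
        return not human_vetoes(target)
    # Human-out-of-the-loop: the machine's own decision is final.
    return True

# Example: an Iron-Dome-style human-in-the-loop system, where a detected
# rocket is only intercepted once a (here simulated) operator approves.
operator_approves = lambda target: True  # simulated human "fire" decision
operator_vetoes = lambda target: False   # unused in this mode
print(may_engage("incoming rocket", Autonomy.HUMAN_IN_THE_LOOP,
                 operator_approves, operator_vetoes))  # True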

Even with a purely defensive example like this, there are significant questions to be asked. Where does the moral and legal responsibility lie if, for example, the algorithm fails to detect incoming rockets? Is the team that coded the weapon then at risk of prosecution for manslaughter?

Further risks of allowing the use of artificial intelligence lie in its rapidly evolving and self-improving nature, and that of deep-learning networks. Critics argue that AI weapons, while initially primitive and of narrow strategic use, could advance quickly given the significant capital and technological resources of militaries and governments. These machines may someday become tools of devastating power, capable of locating targeted individuals anywhere on the globe or shutting down an entire city’s power network. This so-called ‘superintelligence’ has to be prepared for, and measures to prevent such technologies from falling into the wrong hands must be put in place before it becomes a reality [7].

As AI technology improves, some governments could begin deploying autonomous weapons that manoeuvre through different environments on their own. However advanced the technology becomes, no one can guarantee that an error will never occur, and an error here could cause a disaster. This raises questions such as: who would be accountable for the death of someone killed by mistake when the killer is a robot [8]? Who would be liable for the actions of such an autonomous killing machine?

References

[1] “TALON Tracked Military Robot – Army Technology”, Army Technology, 2019. [Online]. Available: https://www.army-technology.com/projects/talon-tracked-military-robot/. [Accessed: 21- Mar- 2019].

[2] “Drones in Contemporary Warfare: The Implications for Human Rights”, LSE Human Rights, 2019. [Online]. Available: https://blogs.lse.ac.uk/humanrights/2016/07/07/drones-in-contemporary-warfare-the-implications-for-human-rights/. [Accessed: 21- Mar- 2019].

[3] “Foster-Miller TALON – Military Robots”, Edinformatics.com, 2019. [Online]. Available: https://www.edinformatics.com/math_science/robotics/foster_miller_talon.htm. [Accessed: 21- Mar- 2019].

[4] “T-14 Armata Russian Main Battle Tank”, Armyrecognition.com, 2019. [Online]. Available: https://www.armyrecognition.com/russia_russian_army_tank_heavy_armoured_vehicles_u/t-14_armata_russian_main_battle_tank_technical_data_sheet_specifications_information_description_pictures.html. [Accessed: 21- Mar- 2019].

[5] B. Docherty, “Losing Humanity: The Case Against Killer Robots”, Human Rights Watch, 19 Nov. 2012. [Online]. Available: https://www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots. [Accessed: 10- Mar- 2017].

[6] “Pros and Cons of Autonomous Weapons Systems”, Military Review, May-June 2017. [Online]. Available: https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/.

[7] “The Weaponization of Increasingly Autonomous Technologies: Artificial Intelligence”, UNIDIR, 2019. [Online]. Available: http://www.unidir.ch/files/publications/pdfs/the-weaponization-of-increasingly-autonomous-technologies-artificial-intelligence-en-700.pdf. [Accessed: 21- Mar- 2019].

[8] “Autonomous Weapons: An Open Letter from AI & Robotics Researchers”, Future of Life Institute, 28 Jul. 2015. [Online]. Available: http://futureoflife.org/open-letter-autonomous-weapons/. [Accessed: 8- Mar- 2017].

Initial Decision

We are against autonomous robots fighting on the front lines.

14 thoughts on “Should Autonomous Robots be Fighting on the Front Lines?”

  1. I agree with this article, and the best example of a human making a decision against military protocol is Stanislav Petrov, who saved the world by not launching a retaliatory nuclear strike against America when the early-warning system reported that a nuclear missile was about to hit Russia. He went against protocol and did not fire the nuclear weapons. It was later found that the system was defective and had given incorrect information. An AI system would not have done this, and the Earth would have been destroyed by nuclear armageddon.

  2. Although using artificial intelligence might save human lives, there is always a question of reliability. For example, these robots will probably be wirelessly controlled, making them vulnerable to hacking. This could make them more dangerous than helpful.

  3. My thoughts: virtual reality can only measure the user’s response time under their current conditions. These will obviously change dramatically in real warfare.

  4. You’ve introduced an interesting topic and discussed the pros and cons, but haven’t included any ethical reasoning as far as I can see. Can you point me to it, please?
