My AI dog could nuke you!

Group 4

Introduction

Autonomous weapons have been the subject of many sci-fi films since the inception of robotics. From “The Terminator” to “Ex Machina”, AI weapons were always works of fiction. However, recent advances in technology have allowed for the development of unmanned warfare. The most recent use of such technology came in late November 2020, when Mohsen Fakhrizadeh was shot dead in his car by an AI-controlled machine gun while his wife, sitting merely 25 cm away, was completely unharmed (Kleinman 2020).

Several questions arise:

  1. Is the world prepared for AI weapon systems?
  2. Can they be used to eliminate specific targets with no collateral damage?
  3. Can their use breach our fundamental human rights?

Arguments for AI weapons

Autonomous Weapon Systems (AWS) are becoming more prevalent in modern warfare. They could perform more ethically than soldiers on a battlefield, as emotions would not impair their judgement. In any form, the weapon is pre-programmed with an objective; it will pursue that objective with no sense of self-preservation and no regard for damage to itself (Toscano n.d.). Furthermore, AWS do not operate on Maslow’s hierarchy of needs, as they only require enough power to complete a mission. This may result in a more humane, cautious robotic super-soldier, upholding rule utilitarianism (Chonko n.d.).

Rule utilitarianism is aligned with classical utilitarianism but also takes into account the fairness and justice of a situation. Under this view, to be the better option, AWS should reduce civilian casualties while undertaking an objective. For example, at a roadside checkpoint in Iraq, seven women and children were shot and killed by soldiers after their vehicle failed to stop (McCarthy 2003). An AWS such as QinetiQ’s MAARS (QINETIQ n.d.) could be used to assess the level of threat in such situations and use more precise weaponry to disable a vehicle rather than its occupants. This removes the need for a “shoot first, ask questions later” approach and could save innocent lives.

AWS effectively remove the human error that, in extreme circumstances, can cause severe harm. Within the military, human error has resulted in significant loss of life, such as when the USS Vincennes shot down an Iranian civilian Airbus during the Iran-Iraq war in 1988 (Khodadadi and van Hagen 2020), not to mention the many friendly-fire incidents caused by misidentification both in and out of combat, such as the pair of US Army Black Hawks shot down by friendly F-15s in 1994 (1994 Black Hawk shootdown incident n.d.). AWS are arguably better equipped to distinguish civilians and civilian objects from combatants and military objectives, can in some instances perform even better than human combatants, and could thereby lead to a more significant reduction of pain and suffering.

Some argue that the deployment of AWS is morally wrong because it is disrespectful to be killed by a machine. A counterargument is that the alternative could be death by “over the horizon” weapons, indirect fire, or buried IEDs, none of which lets a combatant see who killed them either (Noone and Noone 2015). The approach of AWS is a utilitarian one: their development is intended to produce an outcome more beneficial than the alternative, which may involve a more significant loss of human life.

Arguments against AI weapons

Incorporating AWS into the conduct of warfare risks infringing fundamental human rights. The 1949 Geneva Conventions on humane conduct in warfare require that principles such as military necessity, discrimination between combatants and non-combatants, and proportionality between the value of a military objective and the potential for collateral damage be continuously assessed in wartime. AI-powered AWS are arguably unable to comprehend such notions, and it is reasonable to assume they would be prone to mistakes. In that case, an accountability issue arises: it becomes impossible to hold anyone responsible for the autonomous actions of an independently functioning robot (Russell, et al. 2015).

Another significant ethical concern arises when warfare is conducted solely by autonomous machines. When human soldiers are removed from the battlefield, the fear of a war’s fatal consequences for one’s own side ceases to be a concern, so the threshold for going to war may be lowered. This phenomenon is linked to humans’ emotional detachment from the act of war and could lead to military actions that might otherwise have been avoided (Ekelhof and Struyk 2014). The public could also become desensitised to the images of war-induced destruction and devastation: media broadcasts would show destroyed machines rather than dead individuals, leading to war being regarded as less detrimental than it truly is.

AI has driven advances in data management, machine learning, facial recognition, design, banking, and healthcare (Insider n.d.; Baidu 2021). However, the significant time, money, and research spent integrating it into militaries worldwide will divert resources from these other uses and limit its potential. Warfare-related use of AI also threatens to tarnish the field’s reputation as a whole, resulting in more resistance to its adoption and slower progress. These concerns culminated in a 2015 open letter calling for a ban on autonomous weapons, co-signed by several respected figures in the technology industry, such as Elon Musk (founder of Tesla, SpaceX, and Neuralink, all of which use AI extensively), Stuart Russell (a computer scientist with substantial contributions to the field of AI), and Eric Horvitz (Microsoft Research Director), as well as hundreds of other industry experts, academics, and researchers (An Open Letter – RESEARCH PRIORITIES FOR ROBUST AND BENEFICIAL ARTIFICIAL INTELLIGENCE n.d.).

A final disadvantage is that, once developed, autonomous weapons will be used not only in warfare but eventually also for policing and border control. This will put civilians and refugees, who are usually unarmed, in the path of these killer machines, greatly increasing the risk of error and of the death or injury that follows. The presence of weapons in a household is known to significantly increase the risk of accidental discharges, injuries, and fatalities; there is no reason to think this will be any different (Harvard Injury Control Research Center n.d.).

Initial Decision

As a group, based on all the up-to-date evidence, we do not endorse the use of AWS for the conduct of warfare.

References:

n.d. 1994 Black Hawk shootdown incident. https://en.wikipedia.org/wiki/1994_Black_Hawk_shootdown_incident.

n.d. An Open Letter – RESEARCH PRIORITIES FOR ROBUST AND BENEFICIAL ARTIFICIAL INTELLIGENCE. https://futureoflife.org/ai-open-letter/?cn-reloaded=1.

Baidu. 2021. “These five AI developments will shape 2021 and beyond.” MIT Technology Review. https://www.technologyreview.com/2021/01/14/1016122/these-five-ai-developments-will-shape-2021-and-beyond/.

Chonko, Larry. n.d. Ethical Theories.

Ekelhof, Merel, and Miriam Struyk. 2014. “Deadly Decisions-8 Objections to killer robots.” http://www.paxforpeace.nl/. https://www.paxvoorvrede.nl/media/files/deadlydecisionsweb.pdf.

n.d. Harvard Injury Control Research Center. https://www.hsph.harvard.edu/hicrc/firearms-research/gun-threats-and-self-defense-gun-use/.

Insider, Business. n.d. Artificial Intelligence News. https://www.businessinsider.com/artificial-intelligence.

Khodadadi, Amin Hossein, and Isobel van Hagen. 2020. “Iranian passenger flight incident a grim echo of U.S. downing of airliner in 1988.” NBC News, 25 July.

Kleinman, Zoe. 2020. “Mohsen Fakhrizadeh: ‘Machine-gun with AI’ used to kill Iran scientist.” BBC News, 7 December.

McCarthy, Rory. 2003. “Seven women and children shot dead at checkpoint.” The Guardian, 1 April.

Noone, Gregory P., and Diana C. Noone. 2015. “The Debate Over Autonomous Weapons Systems.” Case Western Reserve Journal of International Law.

QINETIQ. n.d. MAARS Weaponized Robot. https://www.qinetiq.com/en-us/capabilities/robotics-and-autonomy/maars-weaponized-robot.

Russell, Stuart, Sabine Hauert, Russ Altman, and Manuela Veloso. 2015. “Robotics: Ethics of artificial intelligence.” Nature, 27 May.

Toscano, Christopher P. n.d. “‘Friend of Humans’: An Argument for Developing Autonomous Weapons Systems.” In When Weapons Become Warriors. Unpublished.

3 thoughts on “My AI dog could nuke you!”

  1. While Autonomous Weapon Systems may solve some problems, the problems they create will likely far outweigh the solutions. Controlling people through fear and threat is not an efficient way to maintain order and will inevitably violate human rights.

  2. Opening statement. The problem is clearly stated and there is a clear dilemma.
    I think this is a good topic, and there are good cases to be made for both sides of the issue.

    Arguments for: An acceptable use of ethical theories, but do look at expanding those used. For example, Kant’s theory can be used as support, since AWS do not discriminate. Focus on improving this for Assignment Two.

    Arguments against: There is poor use of ethical theories. It’s clear from your text that you are considering them, but you need to clearly state which of the theories support your arguments against!
    Focus on improving this for Assignment Two.

    Advice for Assignment Two: What stakeholders can be identified? What options for action are there? A win-win could be suggested, perhaps.

    Try and drum up more comments. I’m perfectly OK with you striking deals – whereby you comment on other articles and they comment on yours.

  3. An interesting read; all the pros of AWS present a strange, unimaginable future for our generation. But perhaps once they become more common, people will get used to them. It’s certainly possible that future generations will consign manned warfare to the history books.

    Personally, though, I think that unless AWS are developed to keep the peace (rather than actually fight wars), they are entirely a bad idea. We should hope that AI is developed away from the morals (and budgets) of war, and that something like the trolley problem is a worst-case scenario rather than a primary function.
