As AI (Artificial Intelligence) technology advances, it is proliferating in many aspects of our lives. It powers personal assistants in our phones and homes, and predictive algorithms ranging from those deployed by multi-billion-dollar companies such as Amazon to household thermostats capable of behavioural adaptation. However, a controversial new application has recently emerged, the ethics of which must be examined closely.
The US Navy has revealed plans to install an AI system known as ‘CLAWS’ in its Orca-class robot submarines; this high level of AI would allow them to perform a wide range of operations without a human controller. These submarines have a modular payload bay, currently outfitted with 12 torpedo tubes. This combination would allow them to sink targets without human input.
According to one article, the Navy said, ‘The Orca will have well-defined interfaces for the potential of implementing cost-effective upgrades in future increments to leverage advances in technology and respond to threat changes’ (Morrison 2020). This raises the question: could these submarines be adapted to kill autonomously?
Weapons are tools used to conduct conventional wars and protect national security; advancing weaponry has thus been a research focus in many countries for millennia. This development of military technology promotes development of civilian technology, such as aircraft, GPS and microwave ovens. The development of weaponised AI, such as ‘CLAWS’, therefore also promotes a wider context of technological innovation. The AI-controlled submarine needs to return to port only for maintenance or resupply, allowing it comparatively more time actively completing tasks than manned vessels. From a utilitarian point of view, these points can be considered positive. Utilitarianism champions the greatest increase in happiness for the greatest number of people. ‘CLAWS’ can both better protect the country through more efficient operation (keeping citizens safer and thus happier) and promote wider technological progress (improving happiness through lifestyle) (van de Poel 2011).
Because submarines operate in a harsh environment from which escape in distress is difficult, the mortality rate in accidents is extremely high. Using AI decreases the demand for manned submarines, reducing casualties in combat. It is likely that the Orca submarines will be used to detect and destroy sea-mines, creating an overall safer environment for all manned vessels. In the potential scenario of the submarines targeting manned enemy vessels, removing the responsibility to kill from individuals should also reduce cases of associated mental health conditions such as PTSD. Using AI for these applications reduces human error and allows more rapid responses than human operators can achieve, greatly reducing the likelihood of accidents. The sense of duty required to strive for this reduction in death and suffering is good will, and good will is central to Kantian ethics.
Despite these advantages that ‘CLAWS’ holds over remote human operators, several questions arise: Should an intelligent but unemotional robot have the right to kill a person, even in war? Are there laws restricting the behaviour of robots?
One key issue is that technological progress should not overlook ethical restrictions, the purpose of which is not to hinder scientific development but to provide ethical support and regulation. Intelligent robots should be developed to improve the quality of human life and enable people to better enjoy the associated benefits, not to put lives at risk. Designing ‘CLAWS’ is therefore immoral from the perspective of the actor, and thus of virtue ethics: it is not virtuous to recklessly overlook well-established principles. Accountability for war crimes is another major concern. Complex systems must be implemented to regulate and restrict the behaviour of the robots, but there are severe implications if these systems fail. Who should be responsible for any damage or death caused by AI-controlled systems behaving unlawfully? The robot itself cannot be punished within the existing legal system; prosecuting its manufacturer or the Navy personnel involved, however, could be seen as equally unjust given the AI’s autonomy. This could lead to legal loopholes that subvert justice, which goes against virtue ethics and its focus on the character of the acting person. Through its lens, avoiding responsibility and acting unjustly is immoral, so allowing this scenario is unethical. The consequences of this scenario also mean ‘CLAWS’ would not be supported by utilitarianism, as there would not be an overall increase in happiness.
Duty ethics advocates equality and reciprocity. Equality requires affording individuals equal concern and respect. Reciprocity means treating humanity not as a means to a goal, but as an end. Both of these requirements are breached not only by the application of AI to submarines, but arguably by all forms of warfare. The automation of killing simply deepens the lack of reciprocity practised in war.
In the ‘Three Laws of Robotics’, proposed by science-fiction writer Isaac Asimov, a robot must not harm a human being, must obey human orders unless they conflict with the first law, and must protect its own existence so long as doing so does not conflict with the first two laws. Whilst this is an oversimplification of a complex problem, the three laws provide a clear outline of the ideal relationship between humans and robots. The Navy’s unmanned submarine programme clearly violates these principles, which assert that robots should not be used as lethal weapons.
The number and severity of the ethical objections to ‘CLAWS’ lead to the conclusion that it is not a morally just application of engineering, despite the advantages it may bring.