AI-Controlled Submarines: Safer Seas, or the Dawn of Automated Warfare?

Group 9

As AI (Artificial Intelligence) technology advances, it is proliferating through many aspects of our lives. It is used in personal assistants for our phones and homes, and in predictive algorithms ranging from those deployed by multi-billion-dollar companies such as Amazon to household thermostats capable of behavioural adaptation. However, a controversial new application has recently emerged, the ethics of which must be examined closely.

The US Navy has revealed plans to install an AI system known as “CLAWS” in its Orca-class robotic submarines; this high level of AI would allow them to perform a wide range of operations without a human controller. These submarines have a modular payload bay, currently outfitted with 12 torpedo tubes. This combination would allow them to sink targets without human input.

According to one article, the Navy said: ‘The Orca will have well-defined interfaces for the potential of implementing cost-effective upgrades in future increments to leverage advances in technology and respond to threat changes’ (Morrison 2020). This raises the question: could these submarines be adapted to kill autonomously?

Positives

Weapons are tools used to conduct conventional wars and protect national security; advancing weaponry has therefore been a research focus in many countries for millennia. The development of military technology has historically driven the development of civilian technology, such as aircraft, GPS and microwave ovens. The development of weaponised AI such as “CLAWS” could therefore spur wider technological innovation. An AI-controlled submarine also requires fewer dockings, returning to port only when it needs maintenance or resupply, allowing it comparatively more time actively completing tasks than manned vessels. From a utilitarian point of view, these points can be considered positive. Utilitarianism champions the greatest increase in happiness for the greatest number of people. “CLAWS” can both better protect the country through more efficient operation (keeping citizens safer and thus happier) and promote wider technological progress (improving happiness through better lifestyles) (van de Poel 2011).

Submarines operate in a harsh environment from which escape is difficult in an emergency, so the mortality rate in accidents is extremely high. Using AI decreases the demand for manned submarines, reducing casualties in combat. The Orca submarines are likely to be used to detect and destroy sea mines, creating an overall safer environment for all manned vessels. In the potential scenario of the submarines targeting manned enemy vessels, removing the responsibility to kill from individuals should also reduce cases of associated mental health conditions such as PTSD. Using AI for these applications reduces human error and allows more rapid response times than those of human operators, greatly reducing the likelihood of accidents. The sense of duty required to strive for this reduction in death and suffering is good will, and good will is central to Kantian ethics.

Negatives

Despite the advantages “CLAWS” holds over remote human operators, some questions arise: should an intelligent but unemotional robot have the right to kill a person, even in war? Are there laws restricting the behaviour of robots?

One key issue is that technological progress should not overlook ethical restrictions, the purpose of which is not to hinder scientific development but to provide ethical support and regulation. Intelligent robots should be developed to improve the quality of human life and enable people to better enjoy the associated benefits, not to put lives at risk. Designing “CLAWS” is therefore immoral from the perspective of the actor, and thus of virtue ethics: it is not virtuous to recklessly overlook well-established principles.

Accountability for war crimes is another major concern. Complex systems must be implemented to regulate and restrict the behaviour of the robots, but there are severe implications if these systems fail. Who should be responsible for any damage or death caused by AI-controlled systems behaving unlawfully? The robot itself cannot be punished within the existing legal system; prosecuting its manufacturer or the Navy personnel involved, however, could be seen as equally unjust given the AI’s autonomy. This could create legal loopholes that subvert justice, which goes against virtue ethics and its focus on the nature of the acting person. Through its lens, avoiding responsibility and acting unjustly is immoral, so allowing this scenario is unethical. The consequences of this scenario also mean “CLAWS” would not be supported by utilitarianism, as there would be no overall increase in happiness.

Duty ethics advocates equality and reciprocity. Equality requires affording individuals equal concern and respect. Reciprocity requires treating humanity not as a means to a goal, but as an end in itself. Both of these requirements are breached not only by the application of AI to submarines, but arguably by all forms of warfare. The automation of killing simply furthers the lack of reciprocity practised in war.

In the “Three Laws of Robotics”, proposed by science-fiction writer Isaac Asimov, a robot must not harm a human being, must obey human orders unless they conflict with that first law, and must protect its own existence so long as doing so does not conflict with the first two laws. Whilst this is an oversimplification of a complex problem, the three laws provide a clear outline of the ideal relationship between humans and robots. The Navy’s unmanned submarine program clearly violates these principles, which assert that robots should not be used as lethal weapons.

Initial Decision

The number and severity of the ethical objections to “CLAWS” lead to the conclusion that it is not a morally just application of engineering, despite the advantages it may bring.

3 thoughts on “AI-Controlled Submarines: Safer Seas, or the Dawn of Automated Warfare?”

  1. This is a very nice article, which makes good use of ethical reasoning in reaching its current decision. In Assignment Two, have a look at whether the other theories could also support or oppose it, but overall this is what I like to see in an article.

    I really like the mention of the Laws of Robotics, which also goes to show that anticipating future developments is a good use of our time.

  2. A very interesting article, with insightful discussion of AI ethics.
    My particular area of interest is who would be responsible if the AI malfunctioned, or even if the US government claimed that it had malfunctioned as cover for attacking an adversary (I do not trust the US not to use this nefariously). The principle of taking responsibility, and being held accountable, for one’s actions when engaging in conflict is extremely important.
    I would be interested to gain more insight into how exactly an AI would determine a target or threat, and what would happen if the system were hacked.
    I like the idea of submarine crews no longer having to be stuck in a metal tube for months; however, I personally don’t think this outweighs the concerns regarding responsibility and accountability.
    Good article!

  3. This is a great article, and I agree with your final decision. It’s interesting to see how utilitarianism can be adapted to fit a number of different opinions. Another point to be made against it is that weapon development uses up finite resources; my own feeling is that government money and global resources should be spent on solving other problems before developing yet more weapons. What about the impact on the ocean? There must be some serious ramifications for marine life.

    Interesting thoughts on robotics law – it’s impossible for legislation to keep up with the rate of technological development; just look at the issues caused by the evolution of the internet.
