The US Army recently had to assuage fears over its so-called ‘killer robots’ after plans emerged to equip ground combat vehicles’ weaponry with the Advanced Targeting and Lethality Automated System (ATLAS). The Defence Department aims to use ATLAS alongside tank crews to “acquire, identify, and engage targets at least three times faster” than is currently possible.
‘Artificial Intelligence’ (AI) weaponry also presents a dilemma in a much broader context. As more countries develop these technologies, it is vital to question the ethics of militaries adopting AI.
Death is inevitable in war, but it can be relatively painless or mean slowly bleeding out on a battlefield. The US defence secretary Jim Mattis stated that a core aim of the US military is to be as “lethal” as possible. AI weaponry has greater lethality than conventional weapons; it can decrease the intensity and duration of pain before death, so it can be judged moral on a Utilitarian ‘moral balance sheet’. This idea can also be affirmed from Deontology through the categorical imperative: condemning someone to a slow, painful death cannot be willed as a universal maxim, as you would not want it for yourself.
Militaries have historically developed technologies for (potential) warfare which later found uses beyond war, as shown by the US military’s development of nuclear reactors. There is therefore a chance that good (pleasure) can come from the pain of war, in the form of improved lives, and this good could reach many more people than the war’s pain affects. From a Utilitarian perspective, the balance of good against bad in the ‘Hedonistic Calculus’ can then tip positive due to the extent of the good that ensues from innovation.
Furthermore, driving technology forward can be seen as virtuous, since Aristotle (the founder of Virtue Ethics) held that there are intellectual virtues such as wisdom, the pursuit of which is fundamentally moral.
The virtue of justice is key both to Aristotle and to the Cardinal Virtues. It can be used to argue that weapons which kill the enemy accurately, and therefore end a war sooner, are virtuous and thus moral to use, provided the aim of the war is just, such as freeing civilians from oppressive rule.
AI is also being developed for advanced targeting systems that use machine learning to find the ‘right people to kill’. The US Department of Defense currently regulates autonomy in weapon systems through Directive 3000.09, which may need amending as “Explainable AI” progresses. AI weapons equipped with such targeting systems would decrease the chance of civilian casualties. This links to Just War Theory, which distinguishes the justice of going to war (Jus ad Bellum), covering the reason and intention for war, from just conduct within war (Jus in Bello), which requires discrimination so that only combatants, not civilians, are targeted. Therefore, AI targeting systems can be classed as moral in war if used in this way.
An engineer developing weapons and defence technology has a relationship with the soldiers who use it, and it is the engineer’s duty to create technology which helps them. From the standpoint of Care Ethics, soldiers depend on the engineer for technology that reduces the danger they face, and are vulnerable if their weapons are outclassed by the enemy’s. It would therefore be moral for an engineer to create AI targeting, as doing so constitutes good care.
Whilst a case can be made for developing AI weaponry, there are good reasons to adopt this technology with caution at the very least. Advanced targeting technologies optimised to kill efficiently greatly increase the rate of deaths; if both sides of a dispute deploy such weaponry, more people will die than if traditional weapons were used. According to the Principle of Utility, actions which cause the greatest happiness to the greatest number (and therefore the least pain) are moral. AI weapons could thus be considered immoral, as they would take more lives and cause more relatives the pain of bereavement.
A country developing AI-assisted weapons technology, even with intent only for defensive use, could raise political tensions, as the resulting military advantage would disrupt the balance of power and deterrence. The USA’s “3rd Offset Strategy” provides a clear example, seeking to maintain and extend the country’s competitive, technological and operational advantages over the rest of the world. From the perspective of Care Ethics, countries with inferior weapons would become dependent on those possessing AI technology. It would be unethical for a country to develop AI weaponry, as it would make others vulnerable and dependent on its decisions: a clear detriment to relations between countries.
AI-assisted weapon technology directly contradicts the Aristotelian virtue of courage. A person remotely operating a machine from a safe distance, rather than facing their adversaries on the battlefield in fair combat, displays cowardice, an unvirtuous and thus unethical trait.
Additionally, AI weapons would provide an unfair advantage to the countries owning the technology. Using Deontology, we could argue that if one country wanted to develop AI weapons, it would have to be prepared to live in a world where intelligent weaponry threatened it too. Under the Universality Principle, the question therefore becomes: ‘would humans want to live in a world where everyone had the right to develop smart weapons, or worse, where everyone already had intelligent weapons with the ability to kill whomever they wanted?’ Deontology suggests this would be an undesirable world, so it is not moral for any country to pursue the development of AI weaponry.
We think that, in an ideal world, it would not be moral to develop AI weaponry. However, given that some governments will always seek a position of military power, it would be immoral to prohibit developments that allow all countries to maintain a deterrent against attack.