YOU WON’T BELIEVE HOW FAST THE US CAN KILL YOU WITH AI! CLICK HERE TO FIND OUT!

Group 48

The US Army recently had to assuage fears over its so-called ‘killer robots’ after plans emerged to upgrade the weaponry of its ground combat vehicles with the Advanced Targeting and Lethality Automated System (ATLAS). The Defence Department aims to use ATLAS alongside tank crews to “acquire, identify, and engage targets at least three times faster” than is currently possible.

‘Artificial Intelligence’ (AI) weaponry also presents a dilemma in a much broader context. As more countries continue to develop these technologies, it is vital to question the ethics of militaries adopting AI.

Multipurpose Unmanned Tactical Transport used by the US

Death is inevitable in war, but it can be relatively painless or it can mean slowly bleeding out on a battlefield. The US defence secretary Jim Mattis stated that a core aim of the US military is to be as “lethal” as possible. AI weaponry offers greater lethality than conventional weaponry: it can decrease the intensity and duration of suffering before death, which counts in its favour on a Utilitarian ‘moral balance sheet’. This idea can also be affirmed from Deontology via the categorical imperative: you should not doom someone to a slow, painful death, as you could not will that anyone, yourself included, be treated that way.

Militaries have in the past developed technologies for (possible) warfare that later found purposes away from war, as shown by the US military’s development of nuclear reactors. This shows there is a chance that good (pleasure) can come from the pain of war, in the form of improved lives, and this good could reach many more people than the war harms. From a Utilitarian perspective, the balance of good against bad in the ‘Hedonistic Calculus’ can therefore tip positive because of the extent of the good that follows from innovation.
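To make this ‘moral balance sheet’ idea concrete, here is a minimal illustrative sketch of a hedonistic tally; the categories, weights and numbers are our own assumptions, invented purely to show how such a calculus might be set up, not a formal Utilitarian method.

```python
# Toy hedonistic calculus: a signed tally of pleasures (+) and pains (-).
# All entries and weights below are invented for illustration only.

def hedonic_balance(outcomes):
    """Sum intensity * duration * number_affected over all outcomes.

    Positive entries are pleasures, negative entries are pains; on this
    simplistic reading, a positive total counts the action as moral.
    """
    return sum(intensity * duration * affected
               for intensity, duration, affected in outcomes)

outcomes = [
    (-8, 1, 10_000),      # pain of war: intense, comparatively brief, fewer people
    (+2, 50, 1_000_000),  # spin-off innovation: modest gain, long-lasting, very many people
]

print(hedonic_balance(outcomes))  # positive here, so the 'balance sheet' tips towards good
```

The counter-argument later in this article amounts to changing these assumed numbers: if AI weapons mean many more deaths, the negative entries grow and the same tally can tip the other way.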

Furthermore, this drive to advance technology can be seen as virtuous, since Aristotle (the founder of Virtue Ethics) believed there are intellectual virtues, such as wisdom, whose pursuit is fundamentally moral.

The virtue of justice is central both to Aristotle and to the list of Cardinal Virtues. It can be used to argue that weapons which kill the enemy accurately, and therefore end a war more quickly, are virtuous and thus moral, provided the aim of the war is just, such as freeing civilians from oppressive rule.

AI is also being developed for advanced targeting systems that use machine learning to find the ‘right people to kill’. The US military currently regulates autonomous weapon systems through Department of Defense Directive 3000.09, which may need to be amended as “Explainable AI” progresses. AI weapons with such targeting systems would decrease the chance of civilian casualties. This links with Just War Theory, which splits into whether it is just to go to war (Jus ad Bellum), covering the reason and intention for war, and how to act justly in war (Jus in Bello), which requires discrimination so that only other combatants, not civilians, are targeted. Therefore, AI targeting systems can be classed as moral in war if used in this way.
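As a purely hypothetical sketch of how the Jus in Bello requirement of distinction could be encoded as a hard constraint on an engagement decision, consider the toy rule below; the classifier outputs, thresholds and names are invented for illustration and do not describe ATLAS, Directive 3000.09, or any real system.

```python
# Hypothetical illustration of 'distinction' as a hard constraint on engagement.
# The probabilities and thresholds are invented; no real system is described.

from dataclasses import dataclass

@dataclass
class TargetAssessment:
    p_combatant: float      # system's confidence that the target is a combatant
    p_civilian_harm: float  # estimated probability of harming nearby civilians

def may_engage(assessment: TargetAssessment,
               min_combatant_confidence: float = 0.95,
               max_civilian_harm: float = 0.01) -> bool:
    """Permit engagement only when combatant identity is near-certain and the
    expected risk to civilians is negligible; otherwise defer to a human operator."""
    return (assessment.p_combatant >= min_combatant_confidence
            and assessment.p_civilian_harm <= max_civilian_harm)

print(may_engage(TargetAssessment(p_combatant=0.99, p_civilian_harm=0.002)))  # True
print(may_engage(TargetAssessment(p_combatant=0.80, p_civilian_harm=0.002)))  # False: defer to a human
```

Note that the ethical choices here, how confident the system must be and how much civilian risk is tolerable, are fixed by engineers before any combat situation arises, which is exactly the point raised in the Care Ethics argument below and in the comments.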

An engineer developing weapons and defence systems has a relationship with the soldiers who use them, and it is the engineer’s duty to create technology that helps them. From the standpoint of Care Ethics, soldiers depend on the engineer for technology that reduces the danger they are in, and they are vulnerable if their weapons are outclassed by the enemy’s. Therefore, it would be moral for an engineer to create AI targeting, as doing so would constitute good care.

Whilst a case can be made that AI weaponry should be developed, there are good reasons why this technology should, at the very least, be adopted with caution. Advanced targeting technologies optimised to kill efficiently greatly increase the rate of deaths; if both sides of a dispute implement such weaponry, more people will die than if traditional weapons were used. According to the Principle of Utility, actions that cause the greatest happiness to the greatest number (and therefore the least pain) are moral. AI weapons, on the other hand, could be considered immoral, as they would take more lives and cause more relatives pain as they mourn their bereavement.

A country developing AI-assisted weapons technology, even solely for defensive purposes, could cause political tensions, as the military advantage it gained would disrupt the balance of power and deterrence. The USA’s “Third Offset Strategy” provides a clear example of this, seeking to maintain and extend its competitive, technological and operational advantages over the rest of the world. From the perspective of Care Ethics, countries with inferior weapons would become dependent on the countries possessing AI technology. It would be unethical for a country to develop AI weaponry, as doing so would make others vulnerable and dependent on its decisions, a clear detriment to relationships between countries.

AI-assisted weapon technology directly contradicts the Aristotelian virtue of courage. Remotely operating a machine from a safe distance, instead of facing one’s adversaries on the battlefield in fair combat, is a sign of cowardice: an unvirtuous and thus unethical act.

Additionally, AI weapons would provide an unfair advantage to the countries owning the technology. Using Deontology, we could argue that if one country wanted to develop AI weapons, it would have to be prepared to live in a world where intelligent weaponry threatened it too. Under the Universality Principle, the question therefore becomes: would humans want to live in a world where everyone had the right to develop smart weapons, or worse, where everyone already had intelligent weapons with the ability to kill whomever they wanted? Deontology suggests this would be an undesirable world, and thus that it is not moral for any country to pursue the development of AI weaponry.

Initial Decision

We think that in an ideal world it would not be moral to develop AI weaponry. However, given that some governments will always seek a position of military power, it would be immoral to prohibit developments that allow all countries to maintain a deterrent against being attacked.

9 thoughts on “YOU WON’T BELIEVE HOW FAST THE US CAN KILL YOU WITH AI! CLICK HERE TO FIND OUT!”

  1. Very interesting article!
    Countries are all competing to develop more advanced weaponry, and the best way to do that is to remove human error.
    Imagine a military drone scanning and crossing off targets without a remote human pilot in control. This links to self-driving cars evaluating what the worst outcome could be: crashing and killing the driver, or swerving and hitting a group of people at the traffic lights. Do we want AI evaluating how much our lives are worth?

    Developments in other technology as a result of war were briefly mentioned.
    https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html This article highlights some of the differing opinions among tech giants on the issue of AI. Advancements in weapons AI will almost certainly result in a more automated world and bring in the ethical argument of AI putting people out of jobs. It is for this reason that some leading technology companies want to introduce laws to limit AI development. I think this is the best course of action: not stopping development, but setting limitations in place.

  2. Good job on the article, it was an interesting read.

    I believe this is a very tricky problem to solve: as mentioned above, there are multiple pros and cons, each with very severe consequences. However, my initial thought was that, as modern war is commonly fought in cities surrounded by civilians, this sort of technology could be used in a positive way to reduce the number of civilians caught in the crossfire.

  3. In my view, the key point you make is that ‘advanced targeting technologies optimised to kill efficiently greatly increase the rate of deaths’. The development of the described technology would probably result in more wars, with people seeing it as the easy way out: it’s now ‘OK’ because the innocent won’t be killed. I think our focus should be on how to reduce war and, if possible, eradicate it completely.

    However, my argument is somewhat idealistic. I think that, applying your arguments to the current political situation of the world, the arguments for developing the described technology are much stronger than the arguments for not doing so.

  4. Great article!

    I regret that, once again, technology that could be used (and is used) to benefit humanity may be repurposed for our destruction. Call me an idealist, but the US is failing to show moral leadership on this issue in the name of ‘national security’.

    The US’ flirtation with killer AI is putting us on a dangerous path to the next major Arms Race. And as with nuclear proliferation, I fear that we will be unable to close this Pandora’s box now that we have started to lift the lid.

  5. A thought-provoking article!

    One of the arguments used is that technologies developed for military purposes have subsequently been used in non-military contexts, therefore ‘good can come from the pain of war.’ However, this good outcome is not predicated on war itself but on the allocation of resources to the pursuit of technological development. It is incidental that this allocation of resources should be stimulated by military objectives. Resources could be allocated to the development of AI technology without war being the reason to initiate the allocation.

    The article also assumes a priori the validity of using Aristotelian virtues to determine the moral quality of an action. I don’t agree that actions characterized by wisdom and courage are always virtuous or moral. Just because an immoral endeavour requires ‘wisdom’ in its execution does not justify it or make it moral. Similarly, if someone were to have the ‘courage’ to do something immoral in person rather than doing it remotely, that ‘courage’ would not make the otherwise immoral act moral. Having said that, I do think that the virtue of justice is a useful moral compass. Thus if the use of AI weaponry would aid the just conduct of a war, in the jus in bello sense (for example, by reducing the number of civilian casualties), that would factor in favour of the use of AI weaponry.

    You argue that on a utilitarian ‘moral balance sheet’, using AI in warfare would increase lethality and therefore reduce pain as deaths would be swifter. You later rightly recognise that, on such a moral balance sheet, this would need to be balanced against the fact that the use of AI would likely increase the number of deaths in warfare. It seems like the latter would outweigh the former, as presumably killing would rank as a greater evil than inflicting pain on such an analysis.

    It seems an additional ethical consideration would concern the ethical values which an AI weapon is programmed to attribute to combatants and civilians. For example, would an AI weapon attribute absolute value to the life or safety of a civilian, therefore prioritizing that over any potential damage which could be inflicted on combatants? If an AI weapon was presented with a situation in which it could launch an attack likely to kill 10 combatants, but which was also likely to kill or harm a civilian, what action would it take? It seems this would depend on the ethical decision of engineers, taken before any combat situation arose.

    Lastly, your initial decision is that ‘given some governments will always seek a military position of power, it would be immoral to prohibit developments that would allow all countries to maintain a deterrent against being attacked’. The fact that some governments will seek to develop advantages in military power (by developing AI weaponry) does not necessarily warrant that all governments should have the freedom to develop AI weaponry to maintain a deterrent against attack. Just because some governments will attempt to produce nuclear armaments, should all countries be permitted to produce nuclear armaments in order to ensure that they all possess a deterrent against being attacked?

  6. Great thinking. Really a tough question that is very important. I think the question is actually much broader than AI and has been around since the invention of the nuclear bomb (although one can argue it goes back a lot further, to the invention of gunpowder in 9th-century China, and so on).

    Personally, I would say the bombings of Hiroshima and Nagasaki were a morally right thing to do, because they ended the war. Based on the Principle of Utility (though you used that principle to argue the opposite), they caused more good for more people, as I would argue they stopped the war and thus saved many more lives than they took.

    However, it is a risky business, and I totally accept that this technology can fall into the wrong hands. That could be devastating, but it is only a risk, and there isn’t much to counter this argument other than that it is a risk. I would also ask: who is to say who the evil people are, when most often the evil see themselves as good?

    This is a tough question and there are always going to be differences of opinion. I would say the immediate need to help people who are being persecuted outweighs the risk that AI gets used badly. Although this is just my opinion, there are many different arguments!

  7. In an ideal world, where everyone lives in peace and harmony and no conflicts occur, it would be possible to have no weapons or defence systems at all. However, the real world is full of conflict and strife; it’s not a case where we can all join hands and sing “Kumbaya”. Countries must be able to defend themselves and the people under their care.

    The tiny island nation of Singapore is well aware of this fact. Although it maintains good diplomatic relations with most, if not all, of the countries of the world, it is keenly aware of the fact that it cannot take these for granted and that no one owes Singapore a living. Singapore and Singaporeans have to find their own way to survive and prosper, turning challenges into opportunities. This is enshrined in its core values. Singapore continues to maintain mandatory conscription as a form of deterrence.

  8. My initial reaction is not to develop the technology, because it can kill more people more quickly, which is horrifying. But on further thought, the technology is going to be developed anyway, so I think it would be best for all nations/powers to develop it, in order to try to keep a status quo and not let one nation be in total control.

    Unfortunately, this will probably lead to higher death rates in war (which, in my opinion, is a sign that humanity is not getting better as a race), and my thought is: what’s stopping people from making this technology target civilians as well?

    In summary I think it’s bad and it’s going to get worse.

  9. Excellent! I really liked the ethical reasoning you provided, particularly when you looked at the virtue of courage.

    This is a tricky one (which makes it an excellent dilemma). Advances happen, and are encouraged, in all fields of human endeavour, and warfare is one of them. Yet after an advance is made, there is only a limited time during which the advantage can be exploited before the other side catches up. In the early 20th century, Britain’s development of the dreadnought led to Imperial Germany’s navy catching up, since it focused production on dreadnoughts too. Now that the US has AI in its military, other nations will develop it too.
