YOU WON’T BELIEVE HOW FAST THE US CAN KILL YOU WITH AI! CLICK HERE TO FIND OUT!

Group 48

The US Army recently had to assuage fears over its so-called ‘killer robots’ after plans emerged to upgrade its ground combat vehicles’ weaponry with the Advanced Targeting and Lethality Automated System (ATLAS). The Defence Department aims to use ATLAS alongside tank crews to “acquire, identify, and engage targets at least three times faster” than is currently possible.

‘Artificial Intelligence’ (AI) weaponry also presents a dilemma in a much broader context. As many countries continue to develop these technologies, it is essential to question the ethics of militaries adopting AI.

Multipurpose Unmanned Tactical Transport used by the US

Death is inevitable in war, but it can be relatively painless or it can mean slowly bleeding out on a battlefield. The US defence secretary Jim Mattis stated that a core aim of the US military is to be as “lethal” as possible. AI weaponry has greater lethality than conventional weapons; it can decrease the intensity and duration of pain before death, which suggests it can be moral on a Utilitarian ‘moral balance sheet’. This idea can also be affirmed from Deontology through the categorical imperative: you should not doom someone to a slow, painful death, as you would not want that for yourself.

Militaries have historically developed technologies for (possible) warfare which have then found purposes away from war, as shown by the US military’s development of nuclear reactors. This shows there is a chance that good (pleasure) can come from the pain of war, in the form of improving people’s lives. This good could touch the lives of many more people than the war itself harms. From a Utilitarian perspective, the balance of good against bad in the ‘Hedonistic Calculus’ can then be tipped positive by the extent of the good that follows from innovation.

Furthermore, driving technology forward can be seen as virtuous, since Aristotle (the founder of Virtue Ethics) believed that there are intellectual virtues, such as wisdom, whose pursuit is fundamentally moral.

The virtue of justice is central both to Aristotle and to the list of Cardinal Virtues. It can be used to argue that weapons which kill the enemy accurately, and therefore end a war more quickly, are virtuous and thus moral, provided the aim of the war is just, such as freeing civilians from oppressive rule.

AI is also being developed for advanced targeting systems through machine learning, finding the ‘right people to kill’. The US Department of Defense currently regulates autonomous weapon systems through Directive 3000.09, which may need to be amended as progress is made in “Explainable AI”. AI weapons with these targeting systems would decrease the chance of civilian casualties. This links with Just War Theory, which splits into whether it is just to go to war (Jus ad Bellum), covering the reason and intention for war, and the just way to act in war (Jus in Bello), which requires distinction so that only other combatants, not civilians, are targeted. Therefore, AI targeting systems can be classed as moral in war if used in this way.
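
To make the ‘distinction’ requirement concrete, here is a minimal, purely illustrative sketch in Python of how an engagement filter might refuse to act unless a target is classified as a combatant with very high confidence. All names and thresholds are hypothetical assumptions for illustration; nothing here reflects ATLAS, Directive 3000.09, or any real system.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical classifier output for a single detected person."""
    label: str          # e.g. "combatant" or "civilian"
    confidence: float   # classifier confidence in the range [0, 1]

# Hypothetical threshold: below this confidence, the decision is
# deferred to a human operator rather than made automatically.
ENGAGEMENT_THRESHOLD = 0.99

def may_engage(detection: Detection) -> bool:
    """Return True only if the target is labelled a combatant with very
    high confidence. This encodes the Jus in Bello 'distinction'
    principle as a gate; it cannot guarantee the classifier is right."""
    return (
        detection.label == "combatant"
        and detection.confidence >= ENGAGEMENT_THRESHOLD
    )

# Ambiguous or civilian detections are never engaged automatically.
print(may_engage(Detection(label="combatant", confidence=0.72)))   # False
print(may_engage(Detection(label="civilian", confidence=0.995)))   # False
```

The point of the sketch is that where the threshold sits, and what counts as a ‘combatant’, are ethical decisions made by engineers long before any combat situation arises.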

An engineer developing weapons and defence technology has a relationship with the soldiers in the military. It is the engineer’s duty to create technology which helps the soldier. From the standpoint of Care Ethics, soldiers depend on the engineer for technology which can decrease the danger they are in, and they are vulnerable if their weapons are outclassed by the enemy’s. Therefore, it would be moral for an engineer to create AI targeting systems, as doing so would constitute good care.

Whilst a case can be made that AI weaponry should be developed, there are good reasons why this technology should, at the very least, be adopted with caution. Advanced targeting technologies optimised to kill efficiently greatly increase the rate of deaths; if both sides of a dispute implement such weaponry, more people will die than if traditional weapons were used. According to the Principle of Utility, actions which cause the greatest happiness to the greatest number (and therefore the least pain) are moral. AI weapons, on the other hand, could be considered immoral, as they would take more lives and cause pain to more bereaved relatives.

A country developing AI-assisted weapons technology, even solely for defensive purposes, could cause political tension, as the military advantage it would possess would disrupt the balance of power and deterrence. The USA’s “3rd Offset Strategy” provides a clear example of this, seeking to maintain and extend its competitive, technological and operational advantages over the rest of the world. From the perspective of Care Ethics, the countries with inferior weapons would become dependent on the countries possessing AI technology. It would be unethical for a country to develop AI weaponry, as it would make others vulnerable and dependent on its decisions, to the clear detriment of relationships between countries.

AI-assisted weapon technology directly contradicts the Aristotelian virtue of courage. Remotely operating a machine from a safe distance, instead of facing one’s adversaries on the battlefield in fair combat, is a sign of cowardice: an unvirtuous and thus unethical act.

Additionally, AI weapons would provide an unfair advantage to the countries owning the technology. Using Deontology, we could argue that if one country wanted to develop AI weapons, it would have to be prepared to live in a world where intelligent weaponry threatened it too. Under the Universality Principle, the question should therefore be: ‘would humans want to live in a world where everyone had the right to develop smart weapons, or worse, where everyone already had intelligent weapons with the ability to kill whomever they wanted?’ Deontology would suggest this would be an undesirable world, and thus it is not moral for any country to pursue the development of AI weaponry.

Initial Decision

We think in an ideal world it wouldn’t be moral to develop AI weaponry. However, given that some governments will always seek a military position of power, it would be immoral to prohibit developments that would allow all countries to maintain a deterrent against being attacked.

29 thoughts on “YOU WON’T BELIEVE HOW FAST THE US CAN KILL YOU WITH AI! CLICK HERE TO FIND OUT!”

  1. Very interesting article!
    Countries are all competing in developing more advanced weaponry and the best way to do that is to remove human error.
    Imagine a military drone scanning and crossing off targets without a remote human pilot in control. This links to self-driving cars evaluating what the worst outcome could be: crashing and killing the driver, or swerving and hitting a group of people at the traffic lights. Do we want AI evaluating how much our lives are worth?

    The development of other technologies as a result of war was briefly mentioned.
    https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html This article highlights some of the differing opinions among tech giants on the issue of AI. Advancements in weapons AI will almost certainly result in a more automated world and bring in the ethical argument of AI putting people out of jobs. It is for this reason that some leading technology companies want to introduce laws to limit AI development. I think this is the best course of action: not stopping development, but setting limitations in place.

    1. Thank you very much! I think that’s really helpful! I agree; I’m really not convinced that AI will do a good job of evaluating either the value of human life or who is actually an enemy.

  2. Good job on the article, it was an interesting read.

    I believe this is a very tricky problem to solve: as mentioned above, there are multiple pros and cons, each with very severe consequences. However, my initial thought was that, as modern war is commonly fought in cities surrounded by civilians, this sort of technology could be used in a positive way to reduce the number of civilians caught in the crossfire.

    1. Harold, you make a good point. Wars these days are often fought in densely built-up areas, aren’t they? So I see how it could reduce the death toll by being more selective in killing.

  3. In my view, the key point you make is ‘Advanced targeting technologies which are optimised to kill efficiently greatly increase the rate of deaths’. The development of the described technology would probably result in more wars, with people seeing it as the easy way out, as if war were now ‘ok’ because the innocent won’t be killed. I think our focus should be on how to reduce war and, if possible, eradicate it completely.

    However, my argument is somewhat idealistic. I think that, applying your arguments to the current political situation of the world, the arguments for developing the described technology are much stronger than the arguments for not doing so.

    1. Yes, if only we lived in an ideal world. It’s sad that we don’t, but Jesus gives us a certain hope that one day all the suffering will be a thing of the past. There won’t be wars in the New Creation! But yes, for now, focusing on reducing warfare would be the best thing. The problem is that as long as there are people in the world, there will always be hatred and war. As G.K. Chesterton once wrote in response to the question ‘What’s wrong with the world today?’: “Dear Sir, I am. Yours, G.K. Chesterton.”

  4. Great article!

    I regret that, once again, technology that could be used (and is used) to benefit humanity may be repurposed for our destruction. Call me an idealist, but the US is failing to show moral leadership on this issue in the name of ‘national security’.

    The US’ flirtation with killer AI is putting us on a dangerous path to the next major Arms Race. And as with nuclear proliferation, I fear that we will be unable to close this Pandora’s box now that we have started to lift the lid.

  5. A thought-provoking article!

    One of the arguments used is that technologies developed for military purposes have subsequently been used in non-military contexts, therefore ‘good can come from the pain of war.’ However, this good outcome is not predicated on war itself but on the allocation of resources to the pursuit of technological development. It is incidental that this allocation of resources should be stimulated by military objectives. Resources could be allocated to the development of AI technology without war being the reason to initiate the allocation.

    The article also assumes a priori the validity of using Aristotelian virtues to determine the moral quality of an action. I don’t agree that actions characterized by wisdom and courage are always virtuous or moral. Just because an immoral endeavour requires ‘wisdom’ in its execution does not justify it or make it moral. Similarly, if someone were to have the ‘courage’ to do something immoral in person rather than doing it remotely, that ‘courage’ would not make the otherwise immoral act moral. Having said that, I do think that the virtue of justice is a useful moral compass. Thus if the use of AI weaponry would aid the just conduct of a war, in the jus in bello sense (for example, by reducing the number of civilian casualties), that would factor in favour of the use of AI weaponry.

    You argue that on a utilitarian ‘moral balance sheet’, using AI in warfare would increase lethality and therefore reduce pain as deaths would be swifter. You later rightly recognise that, on such a moral balance sheet, this would need to be balanced against the fact that the use of AI would likely increase the number of deaths in warfare. It seems like the latter would outweigh the former, as presumably killing would rank as a greater evil than inflicting pain on such an analysis.

    It seems an additional ethical consideration would concern the ethical values which an AI weapon is programmed to attribute to combatants and civilians. For example, would an AI weapon attribute absolute value to the life or safety of a civilian, therefore prioritizing that over any potential damage which could be inflicted on combatants? If an AI weapon was presented with a situation in which it could launch an attack likely to kill 10 combatants, but which was also likely to kill or harm a civilian, what action would it take? It seems this would depend on the ethical decision of engineers, taken before any combat situation arose.

    Lastly, your initial decision is that ‘given some governments will always seek a military position of power, it would be immoral to prohibit developments that would allow all countries to maintain a deterrent against being attacked’. The fact that some governments will seek to develop advantages in military power (by developing AI weaponry) does not necessarily warrant that all governments should have the freedom to develop AI weaponry to maintain a deterrent against attack. Just because some governments will attempt to produce nuclear armaments, should all countries be permitted to produce nuclear armaments in order to ensure that they all possess a deterrent against being attacked?

    1. voteforpedro, I think you raise some really good points! Thanks very much. You rightly make the point that technological advancement off the back of war is merely incidental, due to the allocation of resources into research. Why do we need war to further technology? We should focus on allocating resources more suitably.

  6. Great thinking. Really a tough question that is very important. I think the question is actually much broader than AI and has been with us since the invention of nuclear weapons (although one can argue it goes back a lot further, to the invention of gunpowder in 9th-century China).

    Personally, I would say the bombings of Hiroshima and Nagasaki were morally right. Based on the Principle of Utility (though you also used this principle to argue the opposite), they caused more good for more people: I would argue they ended the war and thus saved many more lives than they took.

    However, it is a risky business and I totally accept it can fall into the wrong hands. That could be devastating, but it is only a risk, and there isn’t much to counteract this argument other than that it is a risk. I would also ask: who is to say who the evil people are, as most often the evil see themselves as good?

    This is a tough question and there are always going to be differences of opinion. I would say the immediate need to help people who are being persecuted outweighs the risk that AI gets used badly. Although this is just my opinion, there are many different arguments!

  7. In an ideal world, where everyone lives in peace and harmony and no conflicts occur, it would be possible not to have any weapons or defence systems at all. However, the real world is full of conflict and strife; it’s not a case where we can all join hands and sing “Kum bah yah”. Countries must be able to defend themselves and the people who are under their care.

    The tiny island nation of Singapore is well aware of this fact. Although it maintains good diplomatic relations with most, if not all, of the countries of the world, it is keenly aware of the fact that it cannot take these for granted and that no one owes Singapore a living. Singapore and Singaporeans have to find their own way to survive and prosper, turning challenges into opportunities. This is enshrined in its core values. Singapore continues to maintain mandatory conscription as a form of deterrence.

  8. My initial reaction is not to develop the technology, because it can kill more people more quickly, which is horrifying. But on further thought, the technology is going to be developed anyway, so I think it would be best for all nations and powers to develop it, in order to try to keep a status quo and not let one nation be in total control.

    Unfortunately this will probably lead to higher death rates in war (which is a sign that humanity is not getting better as a race imo) and my thought is ‘what’s stopping people from making this technology target civilians as well?’

    In summary I think it’s bad and it’s going to get worse.

  9. Excellent! I really liked the ethical reasoning you provided, particularly when you looked at the virtue of courage.

    This is a tricky one (which makes it an excellent dilemma): advances happen, and are encouraged, in all fields of human endeavour, and warfare is one of them. Yet, after an advance is made, there is only a limited time in which the advantage can be exploited before the other side catches up. In the early 20th century, Britain’s development of the dreadnought led to Imperial Germany’s navy catching up, since it focused production on dreadnoughts too. Now that the US has AI in its military, other nations will develop it too.

  10. Surely, rather than using this sort of technology purely for killing, it could be used to incapacitate attackers? With technologies like this in development, it is difficult to say they would definitely not cause death on both sides – who is to say the technology could not be hacked, or a glitch occur, causing it to target the ‘wrong’ people? Why not just incapacitate them in order to avoid more blood being spilled?

  11. A very challenging moral issue as well. An idealistic view would be that we should be working towards reducing the death toll of war, rather than simply making the deaths more efficient, which is what this technology seems to aim for. However, as is addressed in the article, it may in some cases be positive for those who need to be able to deter attacks in a defensive manner.

  12. Developing technology such as AI weaponry may be unethical towards the countries which do not have it, but it can also be very beneficial in maintaining the balance of power.
    Whether or not these improvements hold up to ancient moral tests, the fact remains that countries at war often do not fit those same frameworks.
    The possibility that I personally see in replacing human soldiers with machines is greatly reducing the impact of armed conflict on each participating side. Countries could potentially focus on training engineers to keep up with the “war”-grade technology of their rivals rather than continuously sending troops to their deaths.
    A future where human resources aren’t spent on directly fighting and killing others will, in my opinion, shift the focus further onto the development of such technology, which in turn would trickle down to the general population as lifestyle improvements.

    1. You raise a very interesting point. If AI weaponry is so good, all soldiers (being inferior) would be replaced. The nature of wars would completely change, and with it a number of other factors. You mention one: the development of new technologies by engineers. Their performance would essentially decide the outcome of a war, as their inventions would win or lose the battles. There would still be winners and losers, but the bloodshed would be replaced with a spectacle of armed computers shooting at each other.

  13. Interesting article, and you are right about the emerging dilemma of AI weaponry. However, let’s not forget that there has been much less conflict since the invention of nuclear weapons, and maybe military AI would prove to be so destructive that there will be no wars in the future out of fear of the AI.

    1. Juan, thank you for your comment. And yes, I too believe that the more destructive a weapon is, the more off-putting it is for governments to start wars, due to the pure fear of the total destruction of all parties involved, or even worse, the destruction of only their own military while their opponents’ remains intact. That possible humiliation is probably something every general or person in power considers.

  14. I agree with your point: ideally we shouldn’t be making AI weapons at all, but there is no way to guarantee all countries will comply with this moral code. As for the point my brother Juan makes above, I think it’s dangerous to compare AI weapons to nuclear weapons, as the destructive power of nuclear weapons is a lot more apparent. As such, it is a lot easier for a country to secretly use AI technology as an edge for its military systems without the risk of mutually assured destruction that nuclear weapons pose.

    1. Thank you for the comment. I too believe that a perfect world, in which all countries follow the rules, will never exist. In such a world, rules would eventually become meaningless anyway. But when it comes to technologies that have the potential to kill, great caution has to be taken to ensure that the laws are properly formulated.

      Regarding the parallel with atomic weapons, I believe AI weaponry can be very devastating as well, especially if used in highly populated areas, but I have to agree that the scale of destruction of an atomic weapon is like no other. Yet humanity can still deem a technology too dangerous, even if it doesn’t come close to the severity of atomic weapons.

  15. This article is very thought-provoking. I am intrigued by the efficiency that AI could deliver, but there is a certain risk of increased death rates when applying these weapons in the field. I am not sure it would work well as a hidden weapon that exists mainly as a show of military power, as the US does with nuclear weapons; I certainly hope it is just for show of power. However, a sergeant would want little harm to come to his soldiers, and hence might push the government to allow the use of AI technology to reduce harm to soldiers in the field, in the same way drones are used at the moment.

  16. The conclusion of the article really sums up the issue well. We can argue the morals of developing such technology, and whether it should be used, all we want, but there will always be someone in the world who is not concerned with the morals and will just do it. There obviously should be regulations, because the line between a “deterrent” and something used for oppression can be thin, but I don’t think full-on prohibition is something that can be seriously considered; it would be an irresponsible thing to do.

    1. KVtheProphet, I see your point. Your argument that regulations are necessary but should not extend to full prohibition is, in my opinion, on point. The extent of the regulations in such a dilemma is key to the outcome of future events.

  17. Very interesting article. The idea of AI weaponry is both terrifying and intriguing.

    Though I can see the positive motivations as to why people would want to develop AI targeting systems, we already live in a world where tyrants and dictators have access to Weapons of Mass Destruction. How the AI system is trained to recognise a ‘combatant’ could lead to devastating consequences. The morals of developing AI weaponry could be argued endlessly, but at the end of the day, the only thing that matters is the morals of the country or person developing the system.

    1. I agree with you. Whether a government or dictator abides by an ethical code will determine whether the AI technology follows the same principles. Even Asimov’s laws of robotics are obsolete if the creator decides their invention should be able to breach them.
