The Robuttal: The Ethics Of Enforcing Ethics

Group 18

Is the engineering of machines that enforce third-party ethics a problem, and should it be stopped?

The concept of machines making decisions for themselves seems pretty daunting. Thankfully, the robots we’re starting to see, from the Roomba™ to the autonomous car, are only programmed to make decisions within a predetermined set of parameters.

Faced with the problem of a wall to the left and a chair straight ahead, the robot vacuum is able to interpret this data and calculate that the correct decision is to turn right. But what happens in a morally difficult situation, like a car crash or the trolley problem? The robot becomes a machine that enforces a specific ethical framework onto the situation, that framework being an extension of the machine’s original programming.
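
As a minimal sketch of what “decisions within a predetermined set of parameters” can look like in practice, consider the toy logic below; the function name and rule priority are invented for illustration and describe no real vacuum’s firmware.

```python
# A minimal, hypothetical sketch of bounded decision-making.
# The rule priority and names are invented for illustration;
# no real robot's firmware is described here.

def choose_heading(obstacle_left: bool, obstacle_ahead: bool,
                   obstacle_right: bool) -> str:
    """Pick a heading from a fixed, pre-programmed priority order."""
    if not obstacle_ahead:
        return "forward"
    if not obstacle_right:
        return "turn right"
    if not obstacle_left:
        return "turn left"
    return "reverse"  # boxed in on three sides: back out

# Wall to the left, chair straight ahead, clear to the right:
print(choose_heading(obstacle_left=True, obstacle_ahead=True,
                     obstacle_right=False))  # -> "turn right"
```

Every outcome is enumerated in advance; the machine never steps outside the designer’s parameter space, which is exactly what makes the designer’s ethics binding when the situation turns moral rather than mechanical.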

Instead of being in control of our own moral actions, we find ourselves tied to the tracks with a machine standing behind the lever. This machine is only acting within the framework of its programming, so the ethics of a third party are being enforced on our situation. The questions arise: is this ethical, and should it be prohibited?

It shouldn’t be prohibited

Let’s start with the positives of this scenario. When a machine’s firmware is being designed and programmed, it is likely that more than one person is responsible. When a team of individuals works together to decide how a machine should behave in a crisis, a whole range of moral predispositions is likely brought to the table. While many options may be considered, the reassuring outcome is that the result will ultimately be a democratically determined ethical framework for the machine to operate under.

Furthermore, programmed morality allows for the introduction of ethical legislation at an organisational level, taking ethical management responsibilities out of the hands of the designer and into the hands of a company or government. Such a body’s priority would be to present itself as safe and trustworthy by protecting its stakeholders and avoiding a macro-ethical failure. Moral decisions backed by the majority of people can certainly be construed as ‘optimum’, as represented by moral prescriptivism: the shared and prescribed code of ethics in society.

Is having the freedom to make your own choices in a life-or-death situation worth compromising the safety of all involved? It’s arguable that it isn’t. In the sci-fi film I, Robot, Will Smith’s character is resentful of a robot for saving his life in a car crash when it could have saved a young girl instead. This example of a machine imposing third-party ethics is poignant in its examination of micro-ethics, but it neglects the fact that, without the machine, both people would have died. This highlights the negative ethical implications of rejecting programmed morality completely. If a user chooses to avoid technology like this, lives are risked in favour of ethical freedom, as engineers compromise the effectiveness of potentially life-saving machines so as to avoid ethical complications. In light of a reality where both Will Smith’s character and the young girl are killed, this could seem selfish and unethical.

Furthermore, what if a psychopath is in control of such a machine?

Artificial Intelligence Trolley Problem

Would it be acceptable for an engineer to design a machine in such a way as to prohibit its use by unethical people? Although it means resorting to moral realism, society would likely agree that this might be the right thing to do. A car that shuts down before a dangerous road-rage incident occurs, or a gun that refuses to fire during a mass shooting, would impose moral frameworks on users against their will, but only when the user’s own moral framework was warped and out of sync with the rest of society. From a utilitarian standpoint, removing ethical anomalies from society that would otherwise cause great suffering is easily justified, and mirrors the moral prescriptivism enforced by the police, which society already accepts.
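
To make the idea concrete, here is a toy sketch of such an ‘ethical interlock’; every signal, name and threshold is invented for the example, and no real vehicle or firearm system is described.

```python
# A toy, hypothetical 'ethical interlock': the machine refuses to operate
# when its pre-programmed framework judges the situation out of bounds.
# All signals and thresholds here are invented for illustration.

from dataclasses import dataclass

@dataclass
class UsageContext:
    speed_kmh: float          # assumed vehicle telemetry
    aggression_score: float   # 0.0 to 1.0, from some assumed behaviour model

# Limits chosen by the designer (a third party), not by the user.
AGGRESSION_LIMIT = 0.9
SPEED_LIMIT_KMH = 130.0

def permit_operation(ctx: UsageContext) -> bool:
    """Return False to shut the machine down, overriding the user's own choice."""
    if ctx.aggression_score > AGGRESSION_LIMIT and ctx.speed_kmh > SPEED_LIMIT_KMH:
        return False  # the designer's framework is enforced here
    return True

print(permit_operation(UsageContext(speed_kmh=150.0, aggression_score=0.95)))  # False
print(permit_operation(UsageContext(speed_kmh=60.0, aggression_score=0.2)))    # True
```

The point of the sketch is that the thresholds, and therefore the moral line, are fixed by a third party long before the user ever encounters the situation.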

It should be prohibited

Unfortunately, as technology progresses, so does the ability of those wishing to exploit it. An ethics-enforcing machine brings with it the inherent vulnerabilities of a digital system, vulnerabilities that are not associated with a system under human control. In vehicles specifically, this places the security of the occupants and other road users in question.

The core principle here is that the existence of these machines now enables those with poor, misguided morals to enforce them unwillingly upon others. The removal of ethical anomalies mentioned previously is flipped on its head when anyone can tamper with the ethics in play.

If we consider the potentially fatal cocktail of:

- machines entrusted with making life-or-death decisions,
- the inherent vulnerabilities of any digital system, and
- malicious actors with the motive and means to exploit them,

…it becomes clear that the risk associated with a malicious attack is significant and should not be ignored. It could be argued that the existence of ethics-enforcing machines that are inherently vulnerable to manipulation and exploitation is, in itself, unethical.

The intrinsic abuse of personal ethical freedom takes this argument further. Inescapably, whether they are aware of it or not, users of ethics-enforcing machines have their own moral liberties removed. When faced with a potential life-or-death situation, a user’s personal response and individual moral framework go unconsidered. It’s arguable that every human has a right to enact their own morals in any situation, and it may be considered an infringement of basic human rights to revoke this choice.

The conclusion

Depending on the ethical framework we employ, the conclusions we draw will likely differ. From a hedonistic perspective, does the existence of ethics-enforcing machines bring more pleasure overall, even considering the potential risks involved? From an instrumentalist standpoint, could the development of such machines be justifiable, given that the developer is not making the decision to cause harm?

Contemplating the arguments discussed here, two options present themselves: do we permit the existence of ethics-enforcing machines via ethical justification, or do we refuse to develop them?

What would you do?

Would you robut them?

32 thoughts on “The Robuttal: The Ethics Of Enforcing Ethics”

    1. Thanks for the comment, you’re right: terrorism is a large factor for these systems, as hackers will target them; they could bring an entire country to its knees if they manage to get in.

  1. Great article, I find it really interesting how a democracy would likely be used to determine the ethical framework for the machines, but there will still be a disconnect between the users’ ethics and the machines’.

    1. Even with a democratic ethical framework, it won’t include everyone. Just because it’s the most common view doesn’t make it the only view, and that would still single out people who don’t agree with the majority.

    2. Thank you for your comment. This is a good point. It was suggested that the democratic determination would likely have to involve a board of stakeholder representatives. That is to say, the actual end users would have to be represented as well as the designers. Perhaps this leads well into local or national authorities furthering their role as representatives of the people. If a selection of political representatives, such as MPs in the UK, were to have a say in the development, this could allow a greater degree of connection between the users’ morals and the machine’s.
      Callum – Co-author

  2. “This machine is only acting within the framework of its programming, so the ethics of a third party are being enforced on our situation.” – I suppose we are also programmed with an ethical code to begin with. However, we can also question it and refine it. Perhaps robots will also have this capacity. However, would this move them closer to being sentient beings?
    I’m reminded of Asimov’s Laws of Robotics here.
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
    0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

    Of course, these laws provoke the discussion: are they robust (I hope that pun is forgiven) and appropriate? Are we at a stage with robotics where we need to think about implementing them, or a variant?

    The issue is really whether we are at a stage where a machine can make a decision that could cause harm. With driverless cars undergoing extensive trials, we can say we are at that point, therefore your article is very pertinent. Yet this also means, in a way, that the decision has been made for us. The technology, and the knowledge of that technology, is there. So asking whether we should prohibit it may simply mean the technology goes to ground. Perhaps it is better to keep it legal so that we can regulate it better?

    1. Thank you for your comment, you make some good points about the current state of the AI environment. However, I believe that Asimov’s three laws aren’t currently being implemented. With the recent Tesla crash, the car allowed a human to come to harm through inaction, violating the first law. The car knew it was going to crash and even gave the driver warnings, and still didn’t do anything. I wonder if the next step for AI is to make Asimov’s Laws law when developing AI for any purpose.

  3. Excellent article, well thought out and balanced.
    My comments are as follows:
    Whose moral and ethical framework will be used? E.g. American, European, Middle Eastern, Russian.
    The prohibition argument is no longer a valid option, as this technology is already being manufactured; the best we can hope for is international cooperation in the registration and regulation of the research, development and implementation of AI.
    A major concern is the fallibility of the technology to hackers, as covered in the article.
    The problem with the increasing use of robots on the production line is that they don’t buy the final product. E.g. cars.

    1. Thanks for the comment. Yes, these systems are being developed, but the regulations around them sort of need to catch up. The comment on robots not buying cars is correct and should be taken into careful consideration. Humans aren’t efficient, which is bad for business, but robots are and never take a day off. This will need looking at for large-scale systems.

    2. Thank you for your comment. The issue of cultural and national differences is an interesting point. I suppose it would require some form of international accord, or agreement, on the development of these frameworks. But this in and of itself would likely prove difficult. What may be considered morally acceptable in one part of the world may not be considered morally acceptable in another. So, ultimately, this could present a significant roadblock in the development of these frameworks, and the sale of machines operating them, across borders.
      Callum – Co-author

  4. Great article, an enjoyable read!

    I’ve never considered these issues when thinking about our future. The question is, when these robots are made or when autonomous cars are fully implemented, whose ethical framework will become the de facto one? Also, could an immoral government use the robots’ capability to store a set of ethical rules and reprogram it to control its citizens, or subtly control it through regulations on the ethical frameworks required on robots?

    Personally, I agree that robots and technology should have inbuilt/programmed ethics, but this code should be held in read-only memory that does not facilitate reprogramming, so as to avoid those issues. However, a more important issue is who will regulate these programmed ethics, and how, and how will people agree on the subtleties of them when we all think and act differently?

    1. Thanks for the comment, the question of how these will be regulated is a good one; systems like this have so much under the surface that it’s impossible to know how they truly function without having helped develop them in the first place. When deciding on a framework to follow, not everyone will be happy; selfish people will always protect themselves over anyone else. Although not in the best interests of humanity, self-preservation is ethical, and it’s wrong of someone to say otherwise.

  5. Great article, very much in the zeitgeist given the recent autonomous car accident in the USA. As per DrPatrickJS’s comment, the genie is already out of the bottle. Driverless cars will (should!) already be programmed to do the least worst thing, and so there are programmers who are already making those decisions. In this type of situation the technical world turns to regulation in the form of standards – the purpose of which is to harmonise the safe function of things from kettles to nuclear power stations. Robots would then have to prove that their code made ‘standardised’ ethical choices before it could be licensed to operate. The job of deciding such robot ethics would then fall to a multinational group of individuals to negotiate between them. No, it’s not a perfect system – standards committees operate at a glacial pace – but it would at least establish a framework to guide programmers rather than leaving them to their own devices.

  6. A thought-provoking read.
    For me, a key concern, as mentioned in the article, would be the removal of our liberty to make our own choices. I would also add, however: does this not open up the gateway for the powers that be to enforce what they consider to be their own ethics after the AI vehicle has already been made and sold? An example that comes to mind is speed limits. Although there is little argument that some speed limits are enforced for good reason, and that there are laws against exceeding them, it is another question entirely if suddenly we had absolutely no personal choice over what speed we drive at. Or would they be able to delay our journeys by halting our vehicles at certain points for what they consider to be “safety reasons”, even when they are not the ones actually in the situation? Further, having studied Asian cultures, I would think that cultural differences across the world could be a barrier against normalised ethics.

    1. Thank you for your comment. The issue you raise about the continued control of ethics by authorities is something we hadn’t considered, and we were very intrigued by it. We believe there would have to be some kind of agreement or standard regulating exactly what third parties, such as authorities, can do once the frameworks are in operation. Perhaps a model of rules that follows the same form as privacy laws, dictating the balance between security and liberty when it comes to data affecting users.
      Callum – Co-author

  7. Yeah, with any system you can hack it, but that doesn’t stop us using them. Facebook, Twitter and the NHS all use big data systems, but these systems have made life more efficient and mean less data is lost. Looking at the positives (more efficient roads, fewer accidents and shorter journey times), these systems can’t get here soon enough with all the stupid and reckless drivers on the road at the moment. These AI systems have the big advantage that they won’t get things wrong, and they can deal with large amounts of data at once and do quick calculations. The ethics of it all would be to help as many people as possible.

    1. That’s true, we do use all manner of things that could be hacked in everyday life without thinking about it. At the moment, though, none of those things have the ability to kill me or make a life-or-death decision for me!

      I suppose I’ve been fine up to this point, trusting that the technical competency of the people who made my car is enough to save my life. I’m not sure I can assuredly say as much of their moral competency!

      Not for me I’m afraid, I’m keeping my ethics in my own hands!

      1. The ethics, though, will try to save people, whereas some drivers do go out trying to kill others. Hacking of such large systems will get resolved very quickly. When the NHS had ransomware, it was sorted very quickly by hiring an ex-hacker who looked at the code and worked out how to stop it. With a system like this, a team of ex-hackers would be used to keep everyone safe and make sure that anything trying to get in is stopped before it does any damage. You can never be sure of other people’s ethics or what’s going on in their lives, so this sort of system will remove the ethical anomalies, as it says in this article.

  8. This was pretty eye opening! It hadn’t really occurred to me that these sorts of ethical decisions would have to be made by a random engineer as the car/gun/life-saving robot was built.

    I think this may well be a hot topic in years to come, as legislation on “ethical engineering” starts to become essential.

    I personally found the comparison you made to the I-Robot scenario to be particularly helpful. I agree that we can’t truly call ourselves ethical if we don’t adopt this technology if it could save lives.

    I am not going to be an early adopter however. If this occurs in my lifetime, I will be slow on the uptake. The idea of an engineer somewhere making an ethical decision for me doesn’t sound too great!

  9. A good read and a really interesting article!

    I think the crux of the argument falls on the question of whether the ethical framework enforced by the machine is an actual representation of the morals of the designer or only an extension of the machine’s programming. The former certainly comes with its own problems – what right does that one person have to enforce that specific ethical framework, democratic or otherwise? Conversely, the latter is potentially even more sinister – if the robot was not created with a specific ethical framework installed, how can we be sure that it’ll behave in a way that is beneficial or morally sound at all?

  10. I think it’s terrible that robots that could one day be very common in the world might behave in a way that might bring harm to people, especially if designers aren’t thinking about the ethics of the machine at all when making it! It’s just not fair if my opinion isn’t considered if I’m using one of these machines, because I might want to act completely differently to how the robot decides to do it!

    I think robots like this are dangerous and unethical, so future development of them should be banned now before it’s too late!

  11. I’d imagine every robot would be programmed to operate from a utilitarian standpoint, but as the trolley problem shows us, it’s not always that straightforward. I’d agree with a lot of commenters here that the cat is out of the bag, so to speak, but I think the future will be full of cases where people blame designers for murder via the action or inaction of their machine. Either way, when people’s lives are at stake, the engineer will never win.

  12. A great read! I definitely think that there needs to be some sort of legislation put in place to govern the ethical frameworks that are programmed into autonomous cars, but the hard part will be determining the most ethical way to choose that ethical framework! (or even the ethics of choosing the people to choose the ethics!) It gets very complicated very fast! There’s no simple solution, but we definitely need to begin regulating these machines before things start going really wrong.

  13. Very interesting, but I don’t think there’s a question here. Robots will always be better than humans at making decisions. Morals shouldn’t come into it. The iRobot example is good, and the robot definitely did the right thing in that situation, because it could process all the data and make a more informed decision. If we all behaved like robots when it came to safety, there would be far fewer deaths on the roads and elsewhere.

  14. A very insightful article and a very interesting ethical dilemma. I think that as long as the robot isn’t manipulated by humans for their own personal gain, then robots overall will make better judgements than humans. Robots will make quicker and better decisions than humans, which I think in the car scenario will result in fewer collisions and fewer deaths.

  15. Thoughts…

    The team that ‘do’ the programming are likely to share a similar cultural, social, political and ethical environment, so perhaps there is not a broad enough range of moral dispositions.

    I have concerns around the removal of the ‘free will’ individuals possess and operate within. This ‘free will’ morality is perhaps both programmed and innate, and informs our decisions in the everyday choices we face, like whether to have jam or marmalade on our toast, and those big once-in-a-lifetime decisions about swerving our car to avoid hitting a young mother and her baby or veering towards the old man. In the same way that we have no guarantee of human decisions being selfless, compassionate, noble or generous, perhaps we can at best program machines to act in the least damaging manner. Whilst few would disagree that the programming of a gun to refuse to fire in the hands of a terrorist may be the best we can hope for, in other scenarios it could be that the machine is programmed to act in a purely random fashion, making those decisions that cannot be agreed morally by the majority of individuals.

    Other questions then emerge:

    “Will machines become more moral than humans?”

    “Will machines develop a conscience?”

    “Who will then offer counselling to the robots?”

  16. A thought-provoking read. A real-world trolley problem, however unlikely, is an issue and one which will need careful planning and thought.
    In my opinion, removing people’s ethics from judgement can only lead to worse things; it starts by helping, but when does it stop?
    Humans do make mistakes, which help us grow, but at least those mistakes can have blame placed on someone, whether that be due to a bad day at work or a simple lapse in concentration.
    With AIs, who is to blame when something goes wrong? It’s not a matter of if it goes wrong; all systems do.
    If a car is self-driving, who pays the insurance and road tax costs? You don’t pay insurance for a taxi ride.
    Would companies be willing to pay out for this sort of innovation if they have to pay for everything and remove the ownership of a vehicle?
    Would this be a way for governments to control people more, making them use the routes with more business advertising to increase sales instead of a scenic route through the countryside?
    This raises a lot of questions, and I’m going to think a lot harder when I hear about such systems. Great read.

    1. Thanks for the comment. You raised some interesting points; for insuring cars with AI, you’d hope it would be cheaper, as it’s said a self-driving car is less likely to have a crash. Uber was running a few self-driving cars as a taxi service, so that could be the way it goes: a company has a fleet of cars that are insured and then charges people to use them, just as taxis do. This raises another question, which is what will happen to all the taxi drivers? That would be thousands of low-skill jobs gone, replaced by a more efficient and cost-effective robot. I think we are getting to a tipping point which we need to be very careful of.

  17. Looking at I, Robot, it does show what could happen if we aren’t careful how we implement these systems. The scene where Will Smith’s character describes an advert for the company shows where we could be: robots don’t need to be paid, are faster, don’t complain and can work 24/7 without needing breaks. They are a businessman’s dream, a workforce that will always make them more money, without thinking about the consequences this could bring.
    Is a second industrial revolution coming, with people fighting back against these systems to keep their jobs and ethics?

  18. This is a brilliant summary of an inherently complex debate. I think something that is often missed in this discussion is social responsibility. Although there is a societal obsession with progress, and a general expectation that more efficiency is always better, the cost of human progress should always be considered. Surely we cannot just ride a wave of progress, for progress’s sake, without consulting on how the benefits and negatives will be distributed among society? However, this consultation will have to be interdisciplinary, looking at the sociological and philosophical ramifications of robots.

  19. This is not just a problem in cars; AI systems for, say, traffic could decide to slow everyone down, as that is a safer way to travel: if you do crash, you won’t be hurt. How far will these systems go? Many people have things like Alexa and Google Home that constantly listen to what you’re saying, and if that information can be used by these sorts of systems, it will open up a whole new world of privacy concerns and things that can be hacked. A good and interesting read, and one which will always have arguments on both sides to consider.

  20. Hmm, a very interesting take on this issue. It seems that this technology could do more harm than good if the car were to be hacked by someone. This could allow more targeted attacks on people, if you can get into the system, find a specific car and get it to kill the driver. What I’d like to know is what the regulations are now; I know some companies are doing driverless trials, but what stops people going outside these areas with their car? The comments made on whose ethics to use raise a different question: will a car be geolocked to a specific area where its ethics are valid or within government regulations? This could mean a worldwide agreement on an already complex issue, and could it mean a change in how people drive? Would a driverless car know to drive on the opposite side of the road when it passes into another country like France, or would it know it can turn right on a red light at a junction in America? These are just the ones I’m aware of, but I’m sure there are many other little aspects of driving in different countries that wouldn’t be seen here in the UK or other parts of the world.
