Is the engineering of machines that enforce third-party ethics a problem, and should it be stopped?
The concept of machines making decisions for themselves seems pretty daunting. Thankfully, the robots we’re starting to see, from the Roomba™ to the autonomous car, are only programmed to make decisions within a predetermined set of parameters.
Faced with a wall to the left and a chair straight ahead, the robot vacuum can interpret this data and calculate that the correct decision is to turn right. But what happens in a morally difficult situation, like a car crash or the trolley problem? The robot becomes a machine that enforces a specific ethical framework onto the situation, that framework being an extension of the machine's original programming.
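To make the point concrete, here is a minimal Python sketch of this kind of rule-based decision-making. Everything in it (the `Sensors` type, the rules, the harm counts) is an illustrative assumption, not any real vacuum's or car's firmware; the point is simply that an "ethical" rule is structurally just another pre-programmed mapping from inputs to actions.

```python
from dataclasses import dataclass

@dataclass
class Sensors:
    """Hypothetical snapshot of what the robot can currently detect."""
    obstacle_left: bool
    obstacle_ahead: bool
    obstacle_right: bool

def choose_action(s: Sensors) -> str:
    """Pick a move from a fixed, pre-programmed set of rules.

    The robot never 'decides' in any open-ended sense; it maps
    sensor readings to an action the programmer chose in advance.
    """
    if s.obstacle_ahead and s.obstacle_left and not s.obstacle_right:
        return "turn_right"   # wall left, chair ahead -> turn right
    if s.obstacle_ahead and s.obstacle_right and not s.obstacle_left:
        return "turn_left"
    if s.obstacle_ahead:
        return "reverse"
    return "forward"

def crash_decision(harm_if_swerve: int, harm_if_brake: int) -> str:
    """A crude, hypothetical 'ethical' rule: minimise the harm count.

    Structurally identical to the navigation rules above, which is
    exactly why the programmer's ethics end up baked into the machine.
    """
    return "swerve" if harm_if_swerve < harm_if_brake else "brake"

print(choose_action(Sensors(obstacle_left=True,
                            obstacle_ahead=True,
                            obstacle_right=False)))          # -> turn_right
print(crash_decision(harm_if_swerve=1, harm_if_brake=2))     # -> swerve
```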
Instead of being in control of our own moral actions, we find ourselves tied to the tracks with a machine standing behind the lever. This machine is only acting within the framework of its programming, so the ethics of a third party are being enforced on our situation. Two questions arise: is this ethical, and should it be prohibited?
It shouldn’t be prohibited
Let’s start with the positives of this scenario. When a machine’s firmware is being designed and programmed, it is likely that more than one person is responsible. When a team of individuals work together to decide how a machine should behave in a crisis, a whole range of moral predispositions are likely brought to the table. While many options may be considered, a reassuring outcome is that, ultimately, the result will be a democratically determined ethical framework for the machine to operate under.
Furthermore, programmed morality allows for the introduction of ethical legislation at an organisational level, taking ethical management responsibilities out of the hands of the designer and placing them in the hands of a company or government. Such a body's priority would be to present itself as safe and trustworthy by protecting its stakeholders and avoiding a macro-ethical failure. Moral decisions backed by the majority of people can certainly be construed as 'optimum', as represented by moral prescriptivism: the shared, prescribed code of ethics in society.
Is having the freedom to make your own choices in a life-or-death situation worth compromising the safety of all involved? It's arguable that it isn't. In the sci-fi film I, Robot, Will Smith's character resents a robot for saving his life in a car crash when it could have saved a young girl instead. This example of a machine imposing third-party ethics is poignant in its examination of micro-ethics, but it neglects the fact that, without the machine, both people would have died. This highlights the negative ethical implications of rejecting programmed morality completely. If a user chooses to avoid technology like this, lives are risked in favour of ethical freedom, as engineers compromise the effectiveness of potentially life-saving machines to avoid ethical complications. In light of a reality where both Will Smith's character and the young girl are killed, this could seem selfish and unethical.
Furthermore, what if a psychopath is in control of such a machine?
Would it be acceptable for an engineer to design a machine in such a way as to prohibit its use by unethical people? Although it means resorting to moral realism, society would likely agree that this might be the right thing to do. A car that shuts down before a dangerous road-rage incident occurs, or a gun that refuses to fire during a mass shooting, would impose a moral framework on its user against their will, but only when the user's own moral framework was warped and out of sync with the rest of society. From a utilitarian standpoint, removing ethical anomalies from society that would otherwise cause great suffering is easily justified, and it mirrors the moral prescriptivism enforced by the police, which society already accepts.
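A safety interlock of this kind could, in principle, be as simple as the sketch below. The function name, the behaviour score, and the threshold are all purely hypothetical assumptions for illustration, not the logic of any real vehicle or firearm.

```python
def allow_operation(misuse_score: float, threshold: float = 0.9) -> bool:
    """Return False to lock the machine out when a (hypothetical)
    behaviour model flags likely misuse.

    `misuse_score` is assumed to come from some upstream classifier;
    the threshold is an arbitrary illustrative value.
    """
    return misuse_score < threshold

# e.g. a car that shuts down before road rage escalates:
if not allow_operation(misuse_score=0.95):
    print("Vehicle disabled: operator behaviour flagged as unsafe.")
```

The ethical weight sits entirely in who chooses the model and the threshold, which is precisely the third-party enforcement at issue.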
It should be prohibited
Unfortunately, as technology progresses, so does the ability of those wishing to exploit it. An ethics-enforcing machine brings with it the inherent vulnerabilities of a digital system, vulnerabilities that are not associated with a system under human control. In vehicles specifically, this places the security of the occupants and other road users in question.
The core principle here is that the existence of these machines now enables those with poor, misguided morals to impose them on unwilling others. The removal of ethical anomalies mentioned previously is flipped on its head when anyone can tamper with the ethics in play.
If we consider the potentially fatal cocktail of:
- Cloud technology being used to partially control AI vehicles en masse
- The number of vehicles on any given road at any given time, which for London alone averages 21,500 per road per day
- The increasing capability of cyber attackers and malware
- Terrorism by remote control, with no risk to the perpetrator and no substantial resources required
…it becomes clear that the risk of a malicious attack is significant and should not be ignored. It could be argued that the existence of ethics-enforcing machines that are inherently vulnerable to manipulation and exploitation is, in itself, unethical.
The intrinsic abuse of personal ethical freedom takes this argument further. Inescapably, whether they are aware of it or not, users of ethics-enforcing machines have their own moral liberties removed. When faced with a potential life-or-death situation, a user's personal response and individual moral framework go unconsidered. It's arguable that every human has a right to enact their own morals in any situation, and revoking this choice may be considered an infringement of basic human rights.
Depending on the ethical framework we employ, the conclusions we draw will likely differ. From a hedonistic perspective, does the existence of ethics-enforcing machines bring more pleasure overall, even considering the potential risks involved? From an instrumentalist standpoint, could the development of such machines be justifiable given that the developer is not the one making the decision to cause harm?
Contemplating the arguments discussed here, two options present themselves: do we permit the existence of ethics-enforcing machines via ethical justification, or do we refuse to develop them?
What would you do?
Would you rebut them?