The use of Artificial Intelligence (AI) is becoming more common across a variety of industries, and the legal profession is no exception. This article examines the ethics of using an AI as a judge to determine sentencing in court cases. Judges currently follow sentencing guidelines laid out by the Sentencing Council, which are intended to provide greater consistency in sentencing between judges. An AI system recently predicted the outcomes of hundreds of Human Rights cases with an accuracy of 79%; while promising, this demonstrates that improvements are still needed. This article makes its ethical arguments based on certain assumed capabilities of an AI judge:
- The AI system will be unbiased.
- The code for the AI will be written and owned by the judicial system, not outsourced to a third party.
- The algorithm will not be a “black box” and will be open to interrogation.
- Humans will continue to deliver the verdict, with AI used solely to determine the sentence.
All in favour say ‘AI’
Corruption is a significant problem within the U.S. judicial system, with an estimated one million bribes paid each year. A utilitarian framework shows this is unethical: it benefits the few who can afford to pay bribes while the remaining population suffers. The introduction of an AI judge would prevent corruption and deliver fair, just and consistent sentencing, as an AI cannot feel temptation, greed or pressure. This would benefit a larger proportion of the population and produce the greatest balance of good over harm. A reduction in corruption and bribery would also improve public perception of the government, further increasing the overall good achieved through the implementation of AI. Because this benefits the greatest number of people, it is ethically right to introduce an AI judge according to a utilitarian framework.
The utilitarian framework further supports the use of AI in the courtroom through the improved efficiency of the legal system, resulting in a faster sentencing throughput that benefits everyone. Fewer innocent people’s lives would be negatively affected by the lengthy timelines of criminal court cases (the time between the initial offence and completion of the case), which currently average 109 days for charged cases in the UK legal system. One implication of implementing AI is a potential increase in wrongful incarcerations due to statistical anomalies in the data the AI was trained on. However, the overall increase in efficiency across the entire judicial system would benefit both the innocent who are wrongly accused and the taxpayer through reduced case times, thus favouring the majority.
All humans show an element of implicit bias against people of other races and genders. This can be seen within the legal system, where a young black male is almost seven times more likely than a young white male to receive a custodial sentence of 12 months or longer at a Crown Court. Discrimination is not something anyone would wish upon themselves, and it is therefore unethical under duty ethics, a deontological framework that classifies actions against a strict set of rules. Under the assumptions stated above, an AI would not display bias or discrimination, and therefore has the potential to deliver ethical sentencing according to deontology. In this situation the AI system could act more morally and ethically than its human counterparts, and thus contribute to a fairer legal system.
The case against AI
As previously explained, within a utilitarian framework AI would improve the efficiency of the legal system, resulting in a net positive. However, this benefit would come at the cost of more individuals being wrongly incarcerated, due to the higher sentencing rate. Duty ethics, by contrast, highlights the individual instances of wrongful incarceration. This ethical framework classes wrongful incarceration as immoral, as it is a universal law that one would not wish to be punished for a crime one did not commit. Hence, using AI as a judge would be morally wrong because the number of immoral actions would increase: within the framework of duty ethics, it is better that ten guilty men go free than that one innocent man suffer.
A further argument against the use of AI is found within the virtue ethics framework. An individual’s past has a large bearing on their decision making; as such, while a person may commit a crime, a virtuous person might do the same given identical circumstances, meaning the action is not necessarily morally wrong. In determining whether an act is right or wrong, it is important for a judge to consider what a virtuous person would do under the same circumstances and account for that in the sentencing. For example, the sentence in a murder case should reflect the circumstances and the motivation of the defendant. However, an AI cannot grasp the concept of virtues, and would be unable to empathise with the defendant or imagine the actions of a virtuous person, and so would discount this in the sentencing. Thus virtue ethics renders the AI judge immoral. This reveals another flaw of an AI judge: its inability to determine virtues and to understand how they adapt as cultures change.
Cultural changes advance societal morals, driving the development of the legal system. This is seen through the decisions made by judges, such as in Brown v. Board of Education of Topeka in 1954, where the court declared segregated schools unconstitutional, overturning a decision from 1896. AI judges would not facilitate a dynamic legal system, as they would be unable to adapt to developing societal morals. Care ethics states that morals develop over time and that, for a decision to be ethically right, it is essential that it meets the needs of society. Therefore, the use of an AI judge would be ethically wrong, as it would be unable to overturn decisions and pass sentences that reflect this development of morals and meet the needs of society.
To conclude, a sufficiently advanced AI could overcome some of these challenges, but with the capabilities assumed here it would be morally wrong to use an AI judge to determine sentencing.