SHOCKING: Find Out What the Government Doesn’t Want You to Know About AI!

Group 37

In February 2013 a Wisconsin man, Eric Loomis, became the first criminal to receive a sentence strongly influenced by artificial intelligence (AI). Loomis was rated high-risk by COMPAS, a proprietary risk-assessment algorithm, and subsequently sentenced to six years in prison. He appealed, arguing that basing his sentence on a score from a private company's opaque algorithm violated his right to due process. The appeal was rejected.

Following the Loomis case, COMPAS had aided in the sentencing of more than 7,000 US arrestees by 2016, drastically reducing the time and cost of legal trials. Yet many now argue that the algorithm is heavily biased and is producing unfair sentences. Should these algorithms become a vital legal tool, or are they adding to the corruption of our prosecution systems?

The Good of the People?

The long-term happiness of the population favours the implementation of AI sentencing software, supporting the technique from a utilitarian standpoint. Faster case turnaround moves criminals more efficiently onto a path of reformation, so they can contribute beneficially to society sooner. AI would also cut the time a defendant spends in holding, benefiting the general public by saving taxpayers money. Beyond the immediate reduction in processing time, the software paves the way for future innovations in the field, with the eventual goal of a fully automated judicial system. This reaps benefits for the masses by reducing trial costs, as well as removing the need for tedious jury service.

It could be argued, however, that majority happiness is not achieved in the short term, since the high capital costs may be seen as undesirable. The software has also been incorrect in many cases: criminals rated low-risk have quickly reoffended, while criminals rated high-risk have gone on never to reoffend. This raises doubts about the validity and consistency of such unreliable software, as well as the morality of giving it such an important role. It therefore cannot be said that introducing the software brings happiness to the majority of the population, as genuinely high-risk criminals may serve shorter sentences and return to the streets sooner. This is dangerous for the general public and unjust to those criminals wrongly deemed high-risk.

Stickler for the Rules

AI-assisted sentencing can refer to a database of all previous trials against which the trial in question can be compared; this would provide the fairest, most moral result by applying consistent standards and regulations, upholding Kant's moral law. Human legal teams can only recall details from as many cases as the memory of the individuals allows. By collating historic cases of a similar nature, the system could remove the sentimental or prejudiced bias frequently shown by judges (a toy sketch of such case retrieval is given below). The sentencing software would also help nullify the effect of bribes paid to judges in countries rife with corruption, as the software cannot be influenced by humans in court.
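
To make the idea of "comparing against a database of previous trials" concrete, here is a minimal, hypothetical sketch in Python. It is not how COMPAS or any real sentencing tool works; the case features, the similarity measure and the data are all invented for illustration.

```python
# Hypothetical sketch only: a toy way of finding the most similar past cases
# so that like cases can be treated alike. Features and data are invented.

def similarity(case_a, case_b):
    """Count how many features two cases have in common."""
    return sum(case_a[key] == case_b.get(key) for key in case_a)

# A tiny invented "database of previous trials".
history = [
    {"offence": "burglary", "prior_convictions": 0, "weapon": False, "sentence_months": 12},
    {"offence": "burglary", "prior_convictions": 2, "weapon": True,  "sentence_months": 30},
    {"offence": "fraud",    "prior_convictions": 1, "weapon": False, "sentence_months": 18},
]

new_case = {"offence": "burglary", "prior_convictions": 0, "weapon": False}

# The closest precedent then suggests a consistent sentence for the new case.
closest = max(history, key=lambda past: similarity(new_case, past))
print(closest["sentence_months"])  # 12, taken from the most similar past case
```

The appeal of such an approach is consistency: two defendants with near-identical circumstances would be pointed towards near-identical sentences, regardless of which judge hears the case.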

The underlying dataset, however, can be argued to be morally questionable. According to reports, judges issue on average 19.1% longer sentences to black men than to white men who have committed the same crime. If the databases provided to the AI already show an overwhelming bias towards giving black people longer sentences, this racial prejudice will be perpetuated, because the AI can only refer to previous cases. In effect, this amounts to social and racial oppression of a large percentage of the population, as the sketch below illustrates.
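
The following is again a purely hypothetical sketch, not Northpointe's method: a toy "model" that predicts a sentence by averaging the sentences of similar past cases. Real systems do not take race as an explicit input, but proxies in the data can have a similar effect; the toy uses race directly only to keep the example short. The numbers are invented, with a gap of roughly the size reported above built in.

```python
# Hypothetical sketch only: a system that can only look at past cases
# inherits whatever disparity those cases contain.

from statistics import mean

# Invented history: (offence_severity, race, sentence_months).
history = [
    (3, "white", 24), (3, "white", 25), (3, "white", 23),
    (3, "black", 29), (3, "black", 28), (3, "black", 30),
]

def predicted_sentence(severity, race):
    """Average sentence across past cases with the same severity and race."""
    matching = [months for (sev, r, months) in history if sev == severity and r == race]
    return mean(matching)

# The "learned" predictions reproduce the historical gap (roughly 20% longer).
print(predicted_sentence(3, "white"))  # about 24 months
print(predicted_sentence(3, "black"))  # about 29 months
```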

A problem also arises when a case has no obvious similarities to any previous case, decreasing the reliability of the software. AI cannot reliably understand the inner workings of the human brain, or a person's true nature, from 137 multiple-choice questions. The software could also be considered immoral because Northpointe, the private company behind COMPAS, has developed it in such a way that it is impossible to understand how a sentence was constructed. One could argue that the software violates Kantian ethics: the nature of the code makes it impossible to understand how the risk score was determined, and all that is known are the inputs and the outputs. Every criminal has the right to understand the construction of their sentence in terms of criminal history, social background, character and so on, something this software does not allow.

The Facts, the Whole Facts, and Nothing but the Facts

Part of the current sentencing process involves assessing an individual's likelihood of reoffending, based on their character before the crime. This often causes public outcry when first-time violent offenders receive reduced sentences because little is known about them. An AI system could analyse the background information and use only the facts that directly relate to the law, omitting irrelevant information such as personal details.

The sentencing algorithm can also be considered a violation of virtue ethics. Virtue ethics focuses on the character of the acting person, so how could a non-sentient machine or piece of software possibly attain any moral characteristics? Since the software has no ability to develop desirable traits, it cannot be deemed moral: the code has no knowledge of what a moral decision actually is.

Northpointe, the company that developed the COMPAS algorithm, is entirely within its rights to develop the software as it pleases, just as the government is within its rights to adopt and use it. However, implementing the software could have an adverse effect on the rights of the people subject to its decisions, which calls for ethical scrutiny. Consider a defendant who has been judged high-risk by the software after committing a purely accidental crime. The defendant has a clean criminal record, good character and a good background. By not allowing the arrestee to know how their sentence was constructed, their wellbeing is significantly damaged.

It is obvious that AI sentencing software is the T-1000 of the criminal prosecution system – a technique that has to be terminated.

10 thoughts on “SHOCKING: Find Out What the Government Doesn’t Want You to Know About AI!”

  1. Interesting article. I agree that the technology is not currently suitable for use in such a serious application. However, I do believe it has the potential to become a useful tool, perhaps as an aid rather than the direct source of the final decision?

    I think this is similar to university marking, where the outcome can depend strongly on the mood of the marker/judge. As AI does not have mood swings, surely this could improve the consistency of the decisions given?

    1. Yes, I agree that the technology would be better used as a helpful tool rather than as a direct decision maker. But that leaves the interpretation to the judge, who may be biased themselves. It also would not solve the problem that it is impossible to know how the decision was made – all you would know is that a machine has deemed you high or low risk.

      Comparing this to marking methods is interesting; the technology would eliminate the judge’s ‘mood at the time’ approach to sentencing, as the AI algorithm is incapable of having a ‘mood’. Perhaps some research needs to be done to assess how much sentences are affected by a judge’s mood.

  2. Great article!

    I think it’s an interesting use of the technology, but as a human writes the code for the AI, I feel there could always be some form of bias. Also, what happens if someone were to hack the software to give someone a worse/better sentence?

    1. Some good points lemons123. One of the sources referenced in this article found that many individuals hold an underlying racial bias, even if they consider themselves anti-racist. Therefore there is also a chance of the code containing some racial bias. The AI may even be capable of learning to become racially biased, by drawing on methods from previous cases.

      Hacking of the software is a concern that has been raised. However, the algorithm is created using blockchain methods, meaning the code is theoretically ‘unhackable’, so this may not be an issue after all.

  3. I understand that the software in its current state may have been proven to be racially biased, but what’s stopping us from getting rid of that aspect? In my opinion we will never be able to remove racial bias from judges, but surely taking the bias out of man-made computer code is easy. Once the racist lines of code are removed, I can’t see any negatives of using this software compared to human judges?

    1. I agree that racial bias is impossible to remove from human judges, and that it may be possible to remove it from COMPAS’s software. However, even with completely unbiased software, the nature of the code (blockchain) would never allow the user to understand how a risk score was constructed. I believe that, racism aside, the software will always violate Kantian ethics, as it removes the defendant’s right to understand how their sentence was constructed.

      1. Ok, fair enough, but is it even correct that the defendant should have the right to understand the construction of their sentence? These people are in the position they are in for committing a crime, which most likely breached plenty of rights. So why do these criminals deserve to know how their sentence is constructed?

        1. That perhaps opens up another ethical argument about which rights defendants should be given. But the same could be argued for the prosecution side in court: if a criminal is given a seemingly short sentence, the prosecutors will want to know why but will not be able to find out, due to the nature of the software.

  4. I found the description of jury service as ‘tedious’ contentious; many might value jury service as a chance to witness and participate in the justice process.

    As the sole source of sentencing I’m against AI; as support for the judge, I’m in favour.

    Reading through your article, I found it difficult to identify the ethical theories being used; please pay attention to this in assignment two.

  5. I think it’s awful that this software is being used even after being proven to be racist. We are still miles away from where we need to be to employ AI in applications like this one. You will always get the odd racist judge, but this is rare, and humans are far more capable than computers at judging character. With so many people affected by criminal cases, it is wrong to leave any sort of decision up to a computer.
