SHOCKING: Find Out What the Government Doesn’t Want You to Know About AI!

Group 37

In February 2013, a Wisconsin man named Eric Loomis was arrested; he went on to become one of the first defendants to receive a sentence strongly influenced by artificial intelligence (AI). COMPAS, a proprietary risk-assessment algorithm, rated Loomis as high risk, and he was subsequently sentenced to six years in prison. Loomis appealed, arguing that the use of the risk score violated his right to due process because the private company’s algorithm is not transparent. The appeal was rejected.

Following the Loomis case, COMPAS had by 2016 aided in the sentencing of more than 7,000 US arrestees, drastically reducing the time and cost of legal proceedings. Many now argue, however, that the algorithm is highly biased and is producing unfair sentences. Should these algorithms become a vital legal tool, or are they adding to the corruption of our prosecution systems?

The Good of the People?

The long-term happiness of the population favours the implementation of AI sentencing software, supporting the technique from a utilitarian standpoint. The improved turnaround time of cases moves offenders onto a path of reformation more efficiently, so they can return to contributing to society sooner. AI also decreases a defendant’s time in holding, benefiting the general public by saving taxpayers’ money. Beyond the immediate reduction in processing time, a longer-term implication of the software is that it will drive further innovation in the field, with the ultimate goal of a fully automated judicial system. This would benefit the masses by reducing trial costs and removing the need for tedious jury service.

It could be argued, however, that majority happiness is not achieved in the short term, as the high capital costs could be seen as undesirable. The software has also been incorrect in many cases: offenders rated low risk have quickly reoffended, while others judged high risk have gone on never to reoffend. This raises doubts about the validity and consistency of such unreliable software, as well as the morality of giving it such an important role. It therefore cannot be said that introducing the software provides happiness to the majority of the population, since genuinely high-risk offenders may serve shorter sentences and return to the streets more quickly. This is dangerous for the general public and unjust to those wrongly deemed high risk.

Stickler for the Rules

AI-assisted sentencing can refer to a database of all previous trials against which the case in question can be compared; this would provide the fairest, most moral result, applying consistent standards and regulations that uphold Kant’s moral law. Human legal teams can only recall details from as many cases as the memory of the individuals allows. By collating historic cases of a similar nature, the system could remove the sentimental or prejudicial bias frequently shown by judges (a toy sketch of such a comparison is given below). The sentencing software would also help to nullify the effect of bribery of judges in countries rife with corruption, as the software cannot be influenced by humans in court.
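As a purely illustrative sketch of what “collating historic cases of a similar nature” might look like in practice, the toy code below retrieves the most similar past cases and averages their sentences. Every field, measure and value here is invented for illustration; this is not how COMPAS or any real sentencing tool works.

# Hypothetical sketch only: the case features, similarity measure and data
# are invented for illustration and bear no relation to the real COMPAS tool.
from dataclasses import dataclass

@dataclass
class PastCase:
    offence_severity: float    # 0 (minor) to 10 (severe), an assumed scale
    prior_convictions: int
    sentence_months: int

def similarity(a: PastCase, b: PastCase) -> float:
    """Simple inverse-distance similarity between two cases."""
    distance = (abs(a.offence_severity - b.offence_severity)
                + abs(a.prior_convictions - b.prior_convictions))
    return 1.0 / (1.0 + distance)

def suggest_sentence(new_case: PastCase, history: list, k: int = 3) -> float:
    """Average the sentences of the k most similar historic cases."""
    ranked = sorted(history, key=lambda c: similarity(new_case, c), reverse=True)
    top = ranked[:k]
    return sum(c.sentence_months for c in top) / len(top)

history = [
    PastCase(6.0, 2, 36),
    PastCase(5.5, 1, 24),
    PastCase(8.0, 4, 60),
    PastCase(2.0, 0, 6),
]
new_case = PastCase(6.0, 1, 0)   # sentence not yet decided
print(suggest_sentence(new_case, history, k=2))   # averages the two closest cases: 30.0

Even this toy version makes the later Kantian concern concrete: the defendant can see the output, but whether the chosen similarity measure and historic cases are fair is hidden inside the implementation.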

The underlying dataset itself can be argued to be morally questionable. According to reports, judges issue on average 19.1% longer sentences to black men than to white men who have committed the same crime. If the databases provided to the AI already show an overwhelming bias towards giving black people longer sentences, this racial prejudice could be perpetuated, since the AI can only refer to previous cases (a toy illustration follows below). This amounts, in effect, to the social and racial oppression of a large percentage of the population.
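A toy illustration of that perpetuation, using entirely invented numbers rather than real sentencing data: if the historic sentences already encode a disparity between two groups, a system that predicts by matching against those records simply reproduces the disparity for otherwise identical defendants.

# Toy illustration only: all numbers are invented. Two defendants who are
# identical in every legally relevant respect, differing only in group,
# receive different predictions because the historic sentences were biased.
historic = [
    # (prior_convictions, group, sentence_months)
    (1, "A", 12), (1, "A", 13), (2, "A", 18),
    (1, "B", 15), (1, "B", 16), (2, "B", 22),   # longer sentences for group B
]

def predicted_sentence(priors: int, group: str) -> float:
    """Predict by averaging historic sentences for matching defendants."""
    matches = [s for p, g, s in historic if p == priors and g == group]
    return sum(matches) / len(matches)

print(predicted_sentence(1, "A"))   # 12.5
print(predicted_sentence(1, "B"))   # 15.5 -> the historic disparity carries straight through

Note that nothing in this toy code mentions race explicitly; the bias enters purely through the data the system is given, which is exactly the concern raised above.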

A problem also arises when a case has no obvious similarities to any previous case, decreasing the reliability of the software. AI cannot reliably understand the inner workings of the human brain, or a person’s true nature, from the answers to 137 multiple-choice questions. The software could also be said to be immoral because Northpointe, the private company behind COMPAS, has developed it in such a way that it is impossible to understand how the sentence was constructed. One could argue that the software violates aspects of Kantianism: the nature of the code makes it impossible to understand how the risk score was determined, and all that is known are the inputs and outputs. Every defendant has the right to understand how their sentence was constructed in terms of criminal history, social background, character and so on, something this software does not allow.

The Facts, the Whole Facts, and Nothing but the Facts

Part of the current sentencing process involves assessing an individual’s potential to reoffend, based on their character before the crime. This often causes public outcry, as first-time violent offenders receive reduced sentences because little is known about them. An AI system could analyse the background information and use only the facts that directly relate to the law, omitting irrelevant information such as personal details.

The sentencing algorithm can also be considered a violation of virtue ethics. If virtue ethics focuses on the character of the acting person, how could a non-sentient machine possibly attain any moral characteristics? Since the software has no ability to develop virtuous traits, its decisions cannot be deemed moral: the code has no conception of what a moral decision actually is.

Northpointe, the company that developed the COMPAS algorithm, is completely within its rights to develop the software as it pleases, just as the government is within its rights to adopt and use it. However, implementing the software could have an adverse effect on the rights of the people subject to its decisions, which is what demands this ethical scrutiny. Consider a defendant judged high risk by the software after committing a purely accidental crime, despite a clean criminal record, good character and a good background. By not allowing the arrestee to know how the sentence has been constructed, their wellbeing is significantly damaged.

It is obvious that AI sentencing software is the T-1000 of the criminal prosecution system – a technique that has to be terminated.

34 thoughts on “SHOCKING: Find Out What the Government Doesn’t Want You to Know About AI!”

  1. Interesting article, and I agree that the technology is not currently suitable for use in such a serious application. However, I do believe this technology has the potential to become a useful tool, perhaps as an aid rather than the direct source of the final decision?

    I believe this can be compared to university marking, where the result can depend strongly on the mood of the marker/judge. As AI does not have mood swings, surely this can improve the consistency of the responses given?

    1. Yes, I agree that the technology would be better off being used as a helpful tool rather than a direct decision maker. But that then leaves the interpretation to the judge, who may be biased themselves. Also, this would not help with the fact that it is not possible to know how the decision was made – all you will know is that a machine has deemed you high or low risk.

      Comparing this to marking methods is interesting; the technology would eliminate the judge’s ‘mood at the time’ approach to sentencing, as the AI algorithm is incapable of having a ‘mood’. Maybe some research needs to be done to assess how much sentences are affected by the judge’s mood.

    2. Thanks for your comment Sad_Harold. I agree with your point on the AI being used as an aid. I believe that the AI technology is an extremely interesting concept and has potential to optimise the justice system in the long term, but currently there are too many bugs for the system to provide a morally justified service for the criminals.

  2. Great article!

    I think it’s an interesting use of the technology, but I feel that, as a human creates the code for the AI, there could always be some form of bias. Also, what happens if someone were to hack the software to give someone a worse/better sentence?

    1. Some good points lemons123. It was found in one of this article’s references that many individuals have an underlying racial bias, even if they consider themselves anti-racist. Therefore there is also a chance of the code containing some racial bias. The AI may even be capable of learning to become racially biased by drawing on previous cases.

      Hacking of the software was a concern that was raised. However, the algorithm is created using blockchain methods, meaning the code is theoretically ‘unhackable’, so this may not be an issue after all.

    2. Thanks for your comment lemons123. I think the fact that the system is designed by a private company is the main issue here. We do not know the background or the views of the people creating this code. Potentially, such systems could be investigated by a government body in order to regulate their production and ensure fair sentences are provided.

    3. No system is completely unhackable, especially in a developing country: if they can’t afford the best cyber security, surely corruption is highly likely. Additionally, what is to stop corrupt officials faking or lying about what the computer says? It is far easier to cover up lies if fewer human officials are involved in the process.

      1. Sack_sarri, I understand what you’re saying, although I don’t see how it would be easier to cover up what is physically on a computer screen? Surely it’s easier for someone in court to just lie than it is to hide the results of a test.
        Also, the algorithm currently used is developed using blockchain technology, which is the same technology used to create cryptocurrency. I’m not sure on the science behind it, but blockchain technologies are impossible to destroy or modify once they have been created and therefore cannot be “hacked”.

  3. I understand that the software in its current state may have been proven to be racially biased, but what’s stopping us from getting rid of that aspect? In my opinion we will never be able to get rid of racial biases in judges, but surely taking the bias out of a man-made computer code is easy. Once the racist lines of the code are removed, I can’t see any negatives of using this software compared to human judges?

    1. I agree that racial bias is impossible to remove from human judges, and that it may be possible to remove the racial bias from COMPAS’s software. However, even with completely anti-racist software, the nature of the code (blockchain) can never allow the user to understand how a risk score was constructed. I believe that, racism aside, the software will always violate Kantian ethics as it removes the defendant’s right to understand how their sentence was constructed.

      1. Ok fair enough, but is it even correct that the defendant should have the right to understand the construction of their sentence? These people are in the position they are in for committing a crime which most likely breached plenty of rights. So why do these criminals deserve to know how their sentence is constructed?

        1. This perhaps opens up another ethical argument regarding the rights that defendants should be given. But the same could be argued for the prosecution side in court: if a criminal is given a seemingly short sentence, the prosecutors will want to know why, but will not be able to find out due to the software’s nature.

  4. I found the description of jury service as ‘tedious’ contentious; many might value jury service as a chance to witness and participate in the justice process.

    I’m against AI as the sole source of sentencing; as support for the judge, I am in favour.

    Reading through your article, I found it difficult to identify the ethical theories being used; please pay attention to this in assignment two.

  5. I think it’s awful that this software is being used even after being proven to be racist. We are still miles away from where we need to be to employ AI in applications like this one. You will always get the odd racist judge, but this is rare, and humans are far more capable than computers at judging character. With so many people affected in criminal cases, it is wrong to leave any sort of decision up to a computer.

    1. Thanks jimbob, you have mentioned that there will always be racist judges but are still in favour of keeping completely human decision making. There is still the possibility for the AI algorithm to be developed in a way that isn’t racist. Is it wrong to leave decisions up to computers but right to leave decisions up to potentially racist judges??

      1. Action should be taken to ensure that there is no racism within our courts, not just introducing computers to cover up the problems. They should put each judge through a screening and background check to make sure that they have no underlying biases…..

  6. The technology behind AI judicial systems should NEVER be allowed to advance! It is clear to me that this technology is being used to oppress minority cultures, and it removes the blame from any individual. This is institutionalised racism living and breathing before our very eyes!

    This technology makes me sick.

    1. Good to see that you’re so passionate about the subject, Ben. Do you think there is potential to use the algorithm to develop a standardised process for sentencing (without the racism)? That way it removes the risk of the sentence being determined by a racist judge, or simply a judge in a bad mood?

      1. Call me a conspiracy theorist, but I just don’t trust the companies, like Northpointe, that are developing the software. The companies are being granted too much power to determine someone’s future. And not being able to give a meaningful breakdown of how the sentence has been constructed is just ludicrous.

        As for the case of an angry or racist judge, their negative effects are easily singled out as the acts of an individual. With racist software, like COMPAS, it is harder, if not impossible, to find the perpetrator. Not to mention that a racist judge’s career would span up to 50 years (if they were not caught), whereas this software could be used indefinitely, immortalising these racial prejudices in society.

    2. I completely disagree. I think AI advancement is crucial to push humanity into the next stage of technology. However, I do believe that the system requires far more development and rigorous checking procedures before being implemented fully within our justice system.

  7. what i really believe is that you just need to care about the people. i think people’s best interests should always be the number one priority. if the people from the black communities feel it’s negatively affecting them, they should exercise their right to be heard and say that they don’t like this. people on god’s green earth just need to have a little more compassion and using computers just isn’t compassionate. but that’s just my opinion haha lol xx

    1. I agree. Many criminals are a result of an impersonal social services system. Drug addicts, the homeless, young offenders and foster-care children are among the most likely to reoffend after a previous conviction. An impersonal AI system is just another step towards isolating these unfortunate people from real society.

  8. Just let the algorithm do the whole job. Courts are always making mistakes, it’s absurd. Yeah, computers may not show compassion and stuff like that, but at least they can’t make a wrong decision if all they can do is look at facts.

    1. Not a chance. So basically you want us to be able to prosecute anyone based on their background or RACE?? We have been using court systems for 100s if not 1000s of years with a pretty good success rate, so why introduce something now and just mess it all up? Like I said before, we should look at improving what we’ve got before we go introducing technologies that aren’t even proven yet!

      1. I believe the authors have made clear that the undesirable biases of an AI system are due to it being based on data from past cases of human judgement that have been ethically wrong. Put simply, the problem with an AI system is that it is based on human behaviour, which hasn’t always been morally right.

        This highlights how leaving the courts to continue as they are may not be the right thing to do, as humans are surely more susceptible to making a poorly considered, bias-influenced decision. Obviously the technology is not yet ready to be implemented, but given the current speed of AI development it is reasonable to assume that an unbiased AI system will be created in the future. Getting this system developed should be a key objective for the justice system, as humans will always make mistakes and be influenced by bias, and getting rid of these ethically and morally wrong judgements would be a big step forward for society.

        1. So let’s say, hypothetically, that they get this AI system bang on, all racism gone and what not. Would you still be okay with having court decisions made by computers? Sometimes in these cases we need real compassion and empathy for either the defendant or the prosecutors. How can we program computer code to have such traits? We need the sentencing process to really look at the wider picture and consider all involved parties.

    2. The software would build on the mistakes of previous judges, drawing on a history in the USA where racial segregation was prevalent until the latter half of the 20th century. The data from these cases would undoubtedly be used as training data for the AI, so the AI would repeat the same mistakes. It’s sick that you could actually believe that this is okay. I bet you voted Trump.

      1. Obviously those laws have been redressed and changed, such as everyone being able to enjoy bus seats as equals now. Surely the AI would not even take these cases into account, just as many old UK laws have been disposed of. Let’s make justice great again.

    1. Bit narrow-minded, don’t you think mate? You’re just saying that we’ve got problems but only want a quick fix that doesn’t actually solve anything. Fixing things ‘from the ground up’ takes time and would require proper training of all legal personnel.

  9. I’m not too sure about across the pond, but I know that the prison system in the UK is very stretched. If we introduce this technique here then we will be stretching the system even more, as, like the writers said, it will get criminals behind bars quicker. I agree that it could get rid of some biases within the system, but the money could be much better spent. We need to invest in ensuring criminals don’t reoffend rather than in making sure they go behind bars.

    1. I think I have to agree with you here, to be honest. But in terms of the technique being morally right or wrong, what do you think?
      But yes, money could be better spent; maybe the US has focused on the wrong thing here and should instead be tackling recidivism, like you said!

  10. I don’t agree that the software violates Kantianism; surely it agrees with it? I’ve never heard anything before about defendants having a right to know how their sentence was constructed. I think it agrees with Kantianism because it creates a new, more well-rounded rule for how sentences are made. Saying that, I don’t agree with the technique and think it is crucial that any decision about a person’s future life should be made by a HUMAN.
