RoboJudge – Is the Devil in the Data?

Group 2

The use of Artificial Intelligence (AI) is becoming more common across a variety of industries, and the legal profession is no exception. This article examines the ethics of using an AI as a judge to determine sentencing in court cases. Judges follow sentencing guidelines laid out by the Sentencing Council in order to provide greater consistency in sentencing among judges. An AI system recently predicted the outcomes of hundreds of Human Rights cases with an accuracy of 79%, demonstrating that improvements are still needed. This article makes its ethical arguments based on certain assumed capabilities of an AI judge:

  • It is assumed that the AI system will be unbiased.
  • The code for the AI itself will be written and owned by the judicial system, not outsourced to a third party.
  • The algorithm will not be “black boxed” and will be open to interrogation.

Humans will continue to deliver the verdict, with AI solely used to determine the sentencing.

All in favour say ‘AI’

Corruption is a significant problem within the U.S. judicial system with an estimated one million bribes paid each year. A utilitarian framework shows this is unethical as it benefits the few that are able to afford to pay bribes, while the remaining population suffers. The introduction of an AI judge would prevent corruption and deliver fair, just and consistent sentencing as it cannot feel temptation, greed or pressure. This would benefit a larger proportion of the population, and produce the greatest balance of good over harm. A reduction in corruption and bribery would also result in an increase in the positive public perception of the government, which further increases the overall good achieved through the implementation of AI. This benefit to the greatest number of people means it is ethically right to introduce AI as a judge according to a utilitarian framework.

The utilitarian framework further supports the use of AI in the courtroom through the improved efficiency of the legal system, resulting in faster sentencing throughput and benefiting everyone. Fewer innocent people’s lives would be negatively affected by the lengthy timelines of criminal court cases (the time from the initial offence to the completion of the case), which currently average 109 days for charged cases in the UK legal system. An implication of implementing AI is a potential increase in wrongful incarcerations due to statistical anomalies in the data the AI was trained on. However, the overall increase in efficiency of the entire judicial system would benefit both the innocent who are wrongly accused and the taxpayer, due to a reduction in case times, thus favouring the majority.

All humans show an element of implicit bias against people of other races and genders. This can be seen within the legal system, where a young black male is almost seven times more likely than a young white male to receive a custodial sentence of 12 months or longer at a Crown Court. Discrimination is not something anyone would wish upon themselves, and is therefore unethical under duty ethics and a deontological framework, which classifies ethics by a strict set of rules. AIs do not display bias or discrimination, and therefore have the potential to deliver ethical sentencing according to deontology. In this situation the AI system could act in a more moral and ethical way than its human counterparts, and thus contribute to a fairer legal system.

The case against AI

As previously explained, within a utilitarian framework, AI would improve the efficiency of the legal system resulting in a net positive. However, this benefit would come at the cost of more individuals wrongly incarcerated, due to the higher sentencing rate. From the perspective of duty ethics the instances of wrongful incarceration are highlighted. This ethical framework classes wrongful incarcerations as immoral as it is a universal law that one would not wish to be punished for a crime they did not commit. Hence, using AI as a judge would be morally wrong as the number of immoral actions would increase, for within the framework of duty ethics it is better that ten guilty men go free than one innocent man suffer.

A further argument against the use of AI is found within the virtue ethics framework. An individual’s past has a large bearing on their decision making; as such, while a person may commit a crime, a virtuous person may do the same given identical circumstances, meaning the action is not morally wrong. In determining whether an act is right or wrong, it is important for a judge to consider what a virtuous person would do under the same circumstances and account for that in the sentencing. For example, the sentencing in a murder case should consider the circumstances and the motivation of the defendant. However, an AI cannot grasp the concept of virtues, and would be unable to empathise with the defendant or imagine the actions of a virtuous person, and so would discount this in the sentencing. Thus virtue ethics renders the AI immoral. This reveals another flaw of an AI judge: its inability to determine virtues and to understand how they adapt as cultures change.

Cultural changes advance societal morals, driving the development of the legal system. This is seen through the decisions made by judges; such as in the case of Brown v. Board of Education of Topeka in 1954 where the court declared segregated schools to be unconstitutional, overturning a decision from 1896. AI judges would not facilitate a dynamic legal system, as they would be unable to adapt to the developing societal morals. Care ethics states that morals develop with time and that for a decision to be ethically right it is essential that it meets the needs of the society. Therefore, the use of an AI judge would be ethically wrong as it would be unable to overturn decisions and pass sentences to reflect this development of morals and meet the needs of society.

Initial Decision

To conclude, a sufficiently advanced AI could overcome some of these challenges, but with the assumed capabilities it would be morally wrong to use an AI judge.

37 thoughts on “RoboJudge – Is the Devil in the Data?”

  1. Interesting read. It’s got me thinking, if the judge/jury system becomes obsolete would there be any need for human lawyers? The facts of the case need only be presented and the AI’s code determines if and what charges and punishment should be meted out.

    Back to the article though: In the case against AI, you mention there would be a higher rate of wrongful incarcerations. How so? Assuming the bribery factor is taken out, and that human coders of the AI’s justice algorithm do not implement their own biases and the accused are unable to somehow hack the system for favourable verdicts, it would seem to me that the chances of an unjust ruling and consequent sentence would be slim. The facts and evidence are presented and the AI weighs the arguments of motive, means and opportunity. And if the threshold is met or isn’t, a measured decision is made.

    Burgeoning AI use across professions will very likely involve a lot of human handholding for years to come. You rightfully point out that a previously convicted individual may take the same actions as a “virtuous” person under similar circumstances. This would be the point where a human judge may guide or overrule the AI’s prescription.

    Again, an insightful read. We’ll be at the edge of our seats for the final verdict.

    1. Really interesting arguments! Had some thoughts about the first paragraph of objections to the use of AI though:
      1) Why would the sentencing rate increase?
      2) Not sure all deontologists would agree that it is ‘better that ten guilty men go free than one innocent man suffer’ – some duty-based systems might argue that a harsher state is better since discipline/order is an inherently good thing, or that the dangers of having criminals on the loose should be minimised at all costs. On the other hand you could have deontologists who believe that maximising freedom is a ‘perfect duty’ (like in Kant’s system) and no one should even be arrested if there are any doubts about whether they are guilty.

  2. A very interesting concept and it’s clear you’ve spent a fair bit of time thinking through the impacts of replacing people with technology in the law courts. It’s shocking to find out that there are a million bribes paid in the USA a year – a clear sign that there is work to be done if they are to be a truly just society. However, one thing I would challenge is the assumption that the AI will be unbiased, as I think that is one of the main battlegrounds for the implementation of AI. It has been documented that algorithms can be biased; if we are to have a discussion about AI we need to talk about it as it is.

    Your utilitarian argument, I think, argues from an idealist perspective (something you seem to recognise later on). Right now, AI would not make utilitarian sense as it would disadvantage more people than it helps, given its success rate of only 79% (vs. the 2% of people who have been involved in bribery). Although you do identify the wrongness of this from a virtue ethics position, I think it is also wrong from a utilitarian perspective. However, its strength in speeding up processing time would be a very welcome improvement to the current legal system with its bureaucratic lethargy.

    Your final argument, that changing societal norms would be difficult for AI to take into consideration, is an interesting one. I would have thought that the whole point of AI is that it is continually learning and so can respond to changes in its environment. However, if it is not able to meet the changing demands of a society then no society would want to take it on.

    Something I would like to find out more about is how an algorithm would weigh up guilt/innocence. If it cannot take into account the totality of circumstances surrounding a crime then will it ever be suitable to deploy? What metrics does it take into consideration?

    I agree with your conclusion though would have liked to have seen it more fleshed out. What also would have been interesting is seeing the different ethical frameworks (utilitarian, care, virtue) being used on both sides of the argument.

  3. Interesting read. Arguments against the implementation of AI seem to centre largely on what is morally right/wrong, but a possible counter argument would be that morality should be an objective standard. So where a judge might take into account that “while a person may commit a crime, any virtuous person may do the same given identical circumstances meaning the action is not morally wrong”, the action would still be morally wrong, as any criminal act is morally wrong under an objective standard of morality. Hence, the use of AI might actually be more beneficial in keeping order in society, as it might prevent a floodgate of claims appealing to a judge’s perceived sense of morality based on a defendant’s past experiences/current circumstances/society as it stands. However, I do agree with your overall stance, and in the name of justice and law as a means of protection of the innocent, I definitely agree that the use of AI in courtrooms would be morally wrong.

    1. Hi Ann. I agree with you and think that there has got to be some sort of objective standard in the justice system. However, in my opinion I think that while an act can be objectively wrong on its own, in some extreme situations this could be seen as the right thing to do. While the act should still be punished because it is objectively wrong, I think this is where different sentencing should be applied, taking into account the circumstances. This can and does lead to some discrepancies in sentence length for similar crimes, but I think removing any human interaction from this process would lead to bigger issues. I think AI has many benefits, but with something that could affect someone’s life as dramatically as prison sentence length, I do not think that AI is something to be experimented with.

      1. I disagree. I think that there are rules and laws for a reason and that the sentencing for these should be consistent. I do not think that anyone should be pardoned or given a reduced sentence if they have been judged guilty. Maybe not solely to do with sentencing, but seeing the Hillsborough case in the news again recently is a reminder of what a terrible job we sometimes do of serving up justice. How long has this case been going on for? Hopefully the implementation of AI would be able to deliver faster sentencing.

  4. An interesting issue to bring up. It does raise concepts such as AI lawyers and juries for the justice system. Both arguments are good and would benefit from adding more ethical frameworks to the article. Also, there are different types of law, and it seems as if this article is largely based on criminal law. Would AI judges be more suitable for cases that involve family, equity, property, environmental or corporate law? These are areas where the sentences would depend on the programming of the AI, which reflects the opinions of the programmers. Would that be ethically right, and how could that affect the justice system?

  5. It is important to hear you highlight the problems of corruption in the legal system. This is seen in many places across the world, with many of the countries with the biggest problems also being some of the poorest in the world. AI gives at least an initial filter to root out prejudices and bribery. However, I also think that after such a filter there is value in some form of right to appeal to a human court where all factors can be considered and case precedent can be challenged.

    1. I agree with The Traveller. Working in the legal profession myself, I have encountered and heard of much corruption going on and I do not think that is even scratching the surface. Any way in which this can be eliminated would lead to a more fair justice system in my opinion. Especially when it comes to cases regarding large multinational corporations. I do not think that currently AI can solve all the problems with the justice system, but it is definitely an interesting thought.

  6. Morally wrong or not – you probably wouldn’t want a lecturer marking a 15,000 word dissertation on a basic (or even complex) algorithm, so to contemplate an AI using a similar system to determine sentencing in a criminal case is probably not yet feasible. Rigidity and the law ain’t such happy bedfellows anyway and it’s the human element in a trial that evokes some level of faith in the system.

    1. While I think that the idea of AI making such a huge decision is frightening at present, I do believe that the capabilities of AI will far surpass our expectations, and in this case will make sentencing much more efficient and consistent. While the removal of humanity from a decision could cause problems, is it not the human input at present that leads to corrupt and inconsistent decisions? Maybe we would be better off with a more rigid legal system.

  7. An interesting read, and I’m inclined to agree with your conclusion. It seems like a case of short-term gains at the expense of long-term progress. Yes, we could make the judiciary system more efficient and therefore less traumatic for those involved, but we’d run the risk of losing the impact of progressive values in constantly improving the system. However, if we move towards a model of sentencing that’s more focused on rehabilitation than retribution, the possibilities of AI could be really exciting – for instance, in predicting the outcomes of different initiatives. Similar AI technology is already being developed in healthcare, for example in predicting outcomes for stroke patients and tailoring their treatment accordingly (Dr Nachev’s work in this area). Thanks for the food for thought!

  8. An interesting article with good use of the ethical arguments. I would comment that the overall topic is a little unclear. Is the proposal “Should AI be used in sentencing?” that is the length of sentence or the degree of penalty for someone found guilty is determined by AI not by a human?

    In your opening paragraph you talk about AI correctly predicting the outcome of cases but that seems to contradict your final sentence that AI is being used purely as a sentencing tool, not as a tool that assigns guilt or innocence.

    1. We included the sentence about the prediction of outcomes to set the scene and demonstrate the use of AI in the legal system, and to show that it is currently not 100% accurate. We should have been clearer about the intended purpose of that sentence. You correctly discerned that we are specifically addressing the idea that AI would be used in determining the length of the sentence, i.e. the degree of penalty or severity of punishment. We are not (although it may have been implied) looking at the idea of AI determining whether a person is guilty or innocent; that is decided by a human jury.

  9. Very interesting article that definitely poses some questions to ask yourself.

    While I see many benefits from the implementation of AI, I do wonder who is accountable when it goes wrong? As with any system, and even humans, mistakes and errors are made and someone is always accountable. Will one person be accountable for the whole implementation of it? And if it does go wrong what will the repercussions be? The stakes in this situation are very high and an innocent person’s life is potentially being changed forever. I think that this must be considered.

  10. A very interesting article. I am convinced by both sides and I can see why it’s controversial. It’s very well written and researched. However, I do question whether it is possible to build an AI that is completely unbiased. Furthermore, how do you determine or even measure bias? As for the case against having AIs ruling in courts: a good judge is not just a judge that abides by the rule book, but someone who can make nuanced and well-informed decisions. Questions of morality can be complex and I am not sure an AI can be as reflective as a human.
    Another point is that humans would probably still trust the decision of another human being more than that of an AI. So what if influential people who usually bribe the judges decide instead to pay a hacker who can hack into an AI? Can AIs really get rid of injustice?

  11. An interesting read; however, I disagree with the idea that utilitarian ethics would allow for the imprisonment of innocent people, as this does not truly bring happiness to the population, and could in fact decrease happiness.
    In addition, it would be interesting to know how much of the overall time frame from charge to sentencing is taken up by the judge deciding on a sentence; if this is a minority of the case time, it would be unlikely to significantly reduce the cost to the taxpayer.

  12. I think that the legal system, like any other, should embrace change; however, it seems like the algorithms have a long way to go if they can only predict 79% of sentencing cases.
    A jury of one’s peers is one of the fundamental pillars on which our criminal legal system is based and I don’t envisage that changing any time soon. Having said that, there are plenty of other areas of the law that are rule based rather than precedent based and involve simpler cases that could be automated.
    It seems that if the easy cases were taken over by the robo judge then it would allow the judges, lawyers and other resources to be dedicated to the complex cases, which should decrease the miscarriages of justice that occur as a result of the pressures the current legal system is under.
    Overall I thought it was a thought-provoking article that provides a glimpse of the future.

    1. To add onto my previous comment, life isn’t black and white, and I don’t know if an AI system can get around that. Obviously, an AI judge would start to remove bribery and corruption in less developed countries, but would you want to be judged by something whose workings you don’t understand?

      I think with a system like this it will be important to trial it slowly, for example, it could be used to aid a judge to start off with and if it works well it could eventually replace the judge. This way could be a less impactful way of seeing what problems might occur with the system.

      It is a complicated issue and it will be interesting to see how it progresses in the wider context of society.

  13. I think that as more and more of life is automated, some form of AI judge is inevitable. It should, however, not be introduced without a lot more thought and a lot more work carried out to thoroughly mitigate any foreseeable issues. Having said that, the introduction of an unbiased AI judge, with code written and owned by the judiciary, would be a huge step forward in creating fairness and equality of treatment in a system that is at present too open to bribery and corruption.

  14. A thought-stimulating article indeed! The arguments were presented so well that I had conflicting opinions on using AI as a judge in the legal sector. To me it is unclear why and how there would be an upsurge of unjust imprisonments or an increased sentencing rate. Firstly, the fact that an AI judge would be impartial towards bribery is a huge “yes”, in my opinion. Unfortunately, in our world today, corruption is not uncommon, and an AI judge would not succumb to threats, as it does not have any emotional attachments. Having said that, the human element would not be completely eliminated, as it would be humans who design the algorithm and program the AI. In that sense, I disagree with the notion that an AI won’t be able to understand the concept of virtues, because I believe that it can be programmed to fathom some degree of it.
    However, this is tricky because an AI judge will only be able to see situations as “black and white”, and not the “grey areas”, and this is what decided it for me. In reality, humans are not robots themselves and it is difficult to consistently perform “black and white” actions.
    Therefore, I think that it would be unfair for an AI judge to pass sentences on humans because it does not have the capacity and empathy to relate to us. Unless, of course, in the future we have AI so advanced that it could understand humans well.

  15. Is that a standard deviation, your honour, or just mean?

    The utilisation of guidelines as part of the decision making process within the criminal justice system is a longstanding practice, as is the use of various matrices in an attempt to standardise and formalise the way in which both aggravating and mitigating factors can be considered and incorporated.

    In addition to the example referred to in the article regarding the utilisation of the sentencing guidelines, both the police and CPS have utilised a Gravity Matrix to assist in decision making when considering how to dispose of a case following investigation.

    The basic premise of this principle is that for each offence type, there should be a ‘normal’ disposal type; as a baseline more serious offences would result in a court disposal (charge etc), whereas the lesser offences would result in an out of court disposal, such as a caution etc.

    In order to reflect that not all cases involving an offence type are the same (eg shoplifting), the matrix enables both aggravating and mitigating factors to be taken into consideration. Following a review of these factors, the decision maker can raise the final score by one (meaning a more significant outcome), or reduce by one (lesser outcome).

    Similarly, the judicial system allows for other factors to be considered as part of the sentencing process, such as time spent on remand, an early plea etc.

    Using these principles, coding could be written to reflect established practice and guidelines, and a range of options similar to the above could be presented to the judge to assist decision making. These could include:
    • Initial sentence parameters
    • Ability to increase or decrease sentence further due to aggravating or mitigating factors particular to the case
    • Reduction for time on remand / early guilty plea
    • Display to the judge what the current normal distribution is for sentences issued for that offence type
    • Allow the judge to enter a proposed sentence, and highlight back to the judge the variation of their proposed sentence from the mean, and whether it is significant
    • Mandate that, should there be a significant deviation, they record their rationale for the deviation from the mean; the greater the deviation, the more robust the reason would need to be.
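    The deviation check described in the last two options could be sketched like this (a minimal, hypothetical illustration; the function name, threshold and data are assumptions for demonstration, not established practice):

```python
import statistics

# Hypothetical sketch: compare a judge's proposed sentence against the
# historical distribution of sentences for the same offence type, and
# flag any statistically significant deviation so a rationale can be
# recorded. The 2-standard-deviation threshold is an illustrative
# assumption, not a policy figure.

def review_sentence(historical_months, proposed_months, z_threshold=2.0):
    """Return the deviation of a proposed sentence from the historical
    mean (in standard deviations) and whether a rationale is required."""
    mean = statistics.mean(historical_months)
    stdev = statistics.stdev(historical_months)  # sample standard deviation
    z = (proposed_months - mean) / stdev
    return {
        "mean_months": mean,
        "z_score": z,
        "rationale_required": abs(z) > z_threshold,
    }

# Example: past sentences (in months) for one hypothetical offence type.
history = [10, 12, 12, 14, 15, 16, 12, 13]
print(review_sentence(history, 24))  # far above the mean: rationale required
print(review_sentence(history, 14))  # close to the mean: no rationale needed
```

    The greater the z-score, the more robust the recorded rationale would need to be, mirroring the mandate in the final bullet above.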

    Benefits of using such a system in this way include:
    1. An immediate highlight to the judge of any variation between their proposed sentence and the current ‘norm’, enabling them to reflect and amend their decision prior to sentence.
    2. Maintains an electronic record of all judges’ decisions, enabling easy identification of those who either periodically or systematically impose sentences that differ from the mean, particularly if there is statistical significance.
    3. Enables an electronic and statistical record to be made of such sentences, which the AI can use to determine what is an appropriate sentence based upon historical record.

    Over time the data held by the AI will grow, meaning that it can more accurately predict what, based upon the previous decisions that educated and informed judges have made, is an appropriate sentence for that case. When there is sufficient correlation, that is the point at which the AI can assume the role of sentencing.

    The judge would continue to ensure that the rules of the court and evidence were followed.

    The Jury (or magistrate) would determine whether the case was proven.

    AI would determine the outcome.

  16. An interesting read. I believe that using an AI would be a promising way of overcoming corruption in government systems in the future; however, at this stage, I believe it is too early to use AI as a way of determining sentencing lengths, due to the evidence presented in this article.

    In terms of the suggestion made that “AI judges would not facilitate a dynamic legal system, as they would be unable to adapt to the developing societal morals.” I was wondering if those societal morals could be encoded into the AI itself? And if so, could the AI be updated with these evolving societal morals for example on a periodic basis?

    Another thing that came to my mind was how the AI would deal with evidence: who will control what evidence the AI gets to analyse prior to making the final decision, and how do we make sure it is robust to false evidence (such as fake generated videos and fake photos, which I think are becoming an increasingly big problem in today’s world, and also false evidence given by witnesses, as witnesses could potentially be bribed as well)? Although this problem applies to human judges too, would AI be able to offer any improvement on it, or would it be even more susceptible to false evidence?

    It is not obvious to me how to make such a system completely robust to all the outside factors.

  17. Doctors need to see patients to diagnose them. They are presented with the symptoms, but this is not enough; they must see the patient, so that they can join the dots between the different symptoms displayed, and also pick up on clues or things the patient doesn’t disclose that may be useful, which they can then investigate further. This would be impossible if they were only presented with the facts and unable to relate to that person. I think the same could be said for judges: a degree of trained intuition is needed that would be impossible for AI to possess. There are too many grey areas. In addition, as was mentioned in the article, AI cannot pick up on things like remorse, which may have a bearing on the case.
    Of course the increase in equality in the process is a huge benefit, alongside the elimination of bribery; however, it could be argued that if more innocent people are being sent to prison, is it really more ethical?
    One last thought: could this system be hacked? And therefore what are the safeguarding procedures around this? Maybe a good compromise would be to use the AI to determine situation-specific guidelines, in partnership with the judge’s own intuition and training.
    Good article, clear and concise points, and thought provoking.

  18. Interesting article. I believe AI certainly has its place in the legal system; however, I think it needs a lot more improvement. You’ve given the figure of 79%, and I feel that is surely too low for handling something as critical as people’s lives. Long term, fighting against bias and corruption is where I feel AI can help most, so when it is up to standard I think it should be a very important part of the legal system. Another thing that perhaps should be considered is that I don’t believe AI can ever truly be taught empathy. I’m not sure if this plays a role in the sentencing; however, if someone considers a person’s circumstances when giving a sentence then perhaps that is an important aspect to be considered?

  19. The point about discrimination, and eliminating bias, is the most important one here, given the evidence of substantial bias when it comes to sentencing. AI could function in a similar way to the prompts that are used in medical clinical systems when prescribing: they serve to remind clinicians of problems, but can be overridden. In this way the AI could bring sentencing guidelines to the attention of a judge, and also provide a real-time audit function which could highlight if a particular judge was considering a sentence significantly different from those of their peers. I think the proposal was only for AI in sentencing, and the fact that the AI could be blinded to the person in front of it (as in the classical statue of justice) would be an advantage.

  20. An interesting read, which highlights many issues that I enjoyed contemplating. “An AI system recently correctly predicted the outcome of hundreds of Human Rights cases with an accuracy of 79%” could be a decisive point in arguing either way. It would certainly require a lot more work, however using AI to produce an initial sentence could be used; judges would then need to confirm that they were in agreement with the sentence or justify reasons for changing it. Although this might not have the desired effect of shortening the timeline of criminal court cases, it could have the beneficial effect of reducing corruption and discrimination. Judges would have to justify their reasons for going against the AI initial sentence, revealing possible corruption to the wider world or highlighting bias and discrimination both to the wider world and to the judges themselves. The resulting sentences could be used in improving the AI training, however removing the human judge’s input is unlikely to be achieved as “AI cannot grasp the concept of virtues” and the “development of the legal system” would also come from human input. Overall, a decision would need to be made whether the benefits of reducing corruption and discrimination would outweigh the possible lengthening of the timeline of criminal court cases.

  21. Very interesting topic. While I do not believe AI is currently ready for use in real life applications, it could possibly reach such a stage in the near future. However, while AI could potentially provide an ‘impartial’ judgement on a case, I feel most people around the world would be extremely uncomfortable with a ‘computer’ making such critical decisions. I do see the potential with AI acting as an assistant to the judgement process by highlighting important evidence or lack thereof to the judge and jury, thus potentially reducing the time spent on each case.

    It will be interesting to see how the future pans out.

  22. It is awful that there are countries whose judicial systems are affected by bribery and corruption. It would seem that AI would eliminate this problem, making the system more just and seeing fewer innocent, less well-off people being sent to prison.
    All humans can be biased, including judges. To have one’s judgement affected by the skin colour of a person, or something else, cannot lead to a just verdict. It would seem that AI would be impartial, resulting in fairer sentencing.
    The present judicial system seemingly needs more time to process everything before reaching a verdict. Again, AI would reduce this greatly, which would be a great benefit to all parties involved, especially the innocent. Added to that, the fact that the whole procedure would be shorter would make it less expensive for the taxpayer.

    However, AI could also make mistakes by not understanding social and ethical advances. AI would have to be constantly updated to avoid this. Maybe it will be possible one day.
    AI would have a higher rate of sentencing, also resulting in greater numbers of innocent people being imprisoned. This is not justice. However, if it gets more criminals behind bars, it would reduce crime and the suffering of victims. So, would there be a net reduction in suffering? If so, would AI be acceptable, even if it were still not a perfect system? It would seem that AI has to make progress before it can be used. Maybe a combination of human and AI, but then we reintroduce the possibility of bribes and the imperfections of the human system.

  23. A very interesting topic, and one that I could agree with to some extent. Given how bad corruption can be, an AI judge would be of great value. However, the algorithm must be robust and able to account for more than just facts: the body language of the defendant and the ability to understand the broader picture of any given situation (an AI won’t be able to account for everything).

    Personally, I would prefer for an actual judge to make the decisions and then have their decision run against that of the AI, with a conclusion brought about from that. The reason being that if the AI were found to be faulty 20 years from the date of inception, it could potentially mean that all the cases it had handled would require re-visitation. Furthermore, as mentioned, dealing with a human life requires a human touch.

    All in all, the authors have done a good job of capturing the topic and explaining the ethics of it in a clear, concise manner.

  24. Thank you for the thinking exercise! Your arguments seemed balanced, but I am human and just didn’t want the logical solution to win. I began to believe, as GFJ commented, that perhaps it might be helpful to introduce AI as a tool to advise judges with sentencing, requiring them to explain why there was a difference, if any, between the sentence they applied to a particular case and that which the AI suggested. But then, wouldn’t the AI be learning from the history of all those biased decisions and therefore be biased itself? Then I thought about the “potential increase in wrongful incarcerations”, which would be bad enough. But in several states in the USA and in a number of other countries, a wrongful sentence could mean a death sentence. There has to be a human decision at some point in the process. We are flawed, but we understand what it is to be human.

  25. I agree with the authors of this article. The conclusion seems like the way to go, though it is important to be very sure about the boundary of ‘sufficient AI’. How would we know if it is sufficient, or is it in our scientifically curious human nature to keep improving AI, and is this something that would have to be agreed internationally?

  26. I commented earlier, but just this morning I heard that even AI can be biased! I guess that if it is programmed by imperfect humans, it is bound to have some of those same imperfections!
    Maybe the judicial system should aim to train and employ honest humans seeking justice and truth, and for like-minded programmers to program the AI.
    How this could be achieved is the question.

Leave a Reply