Should Robots Force You Behind Bars?!

Group 1


Artificial intelligence (AI) refers to programs that replicate human intelligence and decision making, with the aim of developing learning patterns and the ability to perceive situations or problems as humans do. AI already plays a major role in the financial sector, where it processes large volumes of financial activity and red-flags transactions that appear fraudulent. It has also found a place in healthcare, assisting doctors in surgeries that demand the utmost precision, and it is taking Industry 4.0 by storm as automation and data exchange expand across manufacturing. In the legal profession, however, only limited automation exists, and doubt remains as to whether expanding AI's role, in particular allowing algorithms to judge cases and determine sentences, would be beneficial or ethical [1].

Robots Should Have Full Control

Systemic Bias

In many cases, conscious or unconscious bias can play a huge role in a judge's decisions; with an AI lawyer, this can be avoided. An AI bases its decisions on what is initially fed into the system (its inputs), so by building the system around enhancing empathy and improving diversity, AI bias can be mitigated or avoided [2]. This clearly accords with duty ethics, as no discrimination, whether based on colour or gender, occurs.

ROSS Intelligence: the first AI lawyer, which collects and analyses leading cases.

Likelihood of Recurrence

AI systems have also been developed to replace judges in scenarios involving minimal evidence, usually civil and some criminal claims, where an algorithm called COMPAS is used to assess the likelihood of an offence recurring. For instance, in State v. Loomis (Wisconsin, 2016), the defendant was charged in connection with a drive-by shooting. COMPAS was applied and reported a high risk of re-offending and of violence, and largely on the strength of that report Loomis received a lengthy sentence. Loomis appealed to the Wisconsin Supreme Court, but the appeal was rejected on the grounds that the same sentence would have been imposed without the algorithm [3]. Under duty ethics the algorithm's decision was right, and it was even morally approved by the judges.
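COMPAS itself is proprietary, so its internals are unknown; the sketch below is purely illustrative. It assumes an invented logistic model over hypothetical features (`prior_offences`, `age_at_first_offence`, `failed_appearances`) simply to show the general shape of an actuarial risk score and the low/medium/high bands a court report might present.

```python
import math

# Illustrative only: COMPAS is proprietary. The weights, features, and
# thresholds below are invented to show the shape of an actuarial score.
WEIGHTS = {
    "prior_offences": 0.45,         # more priors -> higher risk
    "age_at_first_offence": -0.05,  # older at first offence -> lower risk
    "failed_appearances": 0.60,     # missed court dates -> higher risk
}
BIAS = -1.0

def recidivism_risk(features):
    """Return a probability-like risk score in [0, 1] via a logistic model."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def risk_band(score):
    """Map the raw score onto the bands a sentencing report might show."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"
```

Under these invented weights, a defendant with five prior offences and two failed appearances lands in the "high" band, while a first-time offender whose first offence came at age 30 lands in the "low" band; the point is only that a single number, not a human judgement, drives the band.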

Resource Allocation

AI systems in law remain scarce because of the high costs they incur. Their allocation must therefore depend on the case type, its importance, and the number of people it affects. This approach is supported by the ethical theories of utilitarianism and care.

Legal Research and Prediction

Traditionally, lawyers needed frequent library visits to research their cases, which led to trials running longer than they should. Recently, however, online legal resources such as LexisNexis and ROSS Intelligence have developed smart AI tools, including online dispute resolution (ODR) systems, which let lawyers search for similar cases based on a set of circumstances and predict the likelihood of a judge ruling in their favour or of a proposed offer being accepted. This is achieved through natural-language software that reads and extracts relevant data, allowing cases and disputes to be resolved more efficiently [4].
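The retrieval step behind such tools can be sketched in miniature. Real systems use far richer natural-language processing; a bag-of-words cosine similarity, with invented case texts and a hypothetical `{"name", "facts"}` record layout, is enough to show the core idea of ranking past cases against a query's circumstances.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase a text and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two texts as bag-of-words vectors."""
    ca, cb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(ca[w] * cb[w] for w in set(ca) & set(cb))
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

def rank_cases(query, cases):
    """Return past cases sorted by textual similarity to the query facts."""
    return sorted(cases,
                  key=lambda c: cosine_similarity(query, c["facts"]),
                  reverse=True)
```

For example, given one precedent about a contract breach and one about a traffic collision, a query describing a sales-contract dispute ranks the contract case first.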

Document and Intellectual Property Assembly

AI tools guide lawyers in assembling the documents handed to a judge. Software matches the produced documents against standardised templates, indicating missing sections or suggesting a better arrangement of the information [5].
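A minimal sketch of the template-matching step, assuming a standardised template is nothing more than an ordered list of required section headings (the headings here are invented; real document-assembly tools work on much richer structured templates):

```python
# Hypothetical template: an ordered list of headings a filing must contain.
REQUIRED_SECTIONS = ["Caption", "Statement of Facts", "Argument", "Conclusion"]

def missing_sections(document):
    """Return the template sections absent from the drafted document."""
    text = document.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in text]
```

A draft containing only a caption, an argument, and a conclusion would be flagged as missing its statement of facts.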

Robots Should NOT Have Full Control

Systematic Bias

Systematic bias forms a barrier to the implementation of AI in courts of law, since its utility and effectiveness rely on its programming [5]. Such internal biases can produce prejudiced and racially discriminatory sentencing; this has been a recurring problem, with African Americans handed longer sentences than white Americans for similar crimes [6]. Reliance on digital data for sentencing also introduces the risk of corruption in the form of data tampering to influence decisions. This misaligns with the duty ethics framework, which holds equality of treatment as an integral principle. Furthermore, the inevitable lack of transparency between AI systems and court users violates societal norms and virtues [7].

Example of Racial Bias in AI.

Likelihood of Recurrence and Security

Some crimes recur only rarely in the justice system and would normally require human judges and a group of emotionally driven jurors. Rarity means minimal data is available for a computer to process, so AI systems must interpolate from sparse data to reach final decisions. This can produce unfair outcomes and could even lead to some cases being automatically dismissed without trial. Moreover, privacy violations can occur: digital systems are susceptible to security breaches, and sensitive information can be leaked unintentionally [8]. Virtue ethics would be difficult to apply here, as the actor is a lifeless computer making decisions, so accountability and responsibility would need to be redefined.
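The fragility of decisions drawn from sparse data can be made concrete with a toy example (every number and label below is invented): a 1-nearest-neighbour rule over a handful of precedents flips its output the moment a single new precedent arrives.

```python
def nearest_precedent(query, precedents):
    """Decide a query case by copying the outcome of the closest precedent.

    precedents: list of (feature_value, outcome) pairs. With sparse data,
    one extra precedent can flip the decision entirely.
    """
    return min(precedents, key=lambda p: abs(p[0] - query))[1]

precedents = [(1.0, "dismiss"), (9.0, "convict")]
before = nearest_precedent(5.1, precedents)   # closest precedent is 9.0
precedents.append((4.5, "dismiss"))           # one new rare-crime precedent
after = nearest_precedent(5.1, precedents)    # decision flips
```

A human judge weighing the same two precedents would at least recognise that the query sits in a grey zone; the rule above simply picks a side either way.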

Resource Allocation

Encouraging the use of AI lawyers by people who cannot afford a human lawyer would increase human consultancy costs and might reduce the number of pro bono cases taken on by firms, which are usually driven by emotion. Additionally, some cases by their nature call for subjective rather than objective judgement. For instance, under the objective judgement a computer applies, a poor man stealing groceries to feed his family would be handed the same sentence as a man stealing a TV [9]. Such a case requires human lawyers and judges to handle it humanely.

Impact on Employment

Implementing AI in law would create redundancies. There would be minimal need for paralegals, since the computer can process information quickly, and judges and jurors could also be made redundant if computers replaced them. This jeopardises interpersonal relationships and neglects the importance of human interaction, a major emphasis of the care ethics framework. Finally, because the pleasure or pain resulting from the outcomes of this application of AI is hard to quantify, the relatively materialistic theory of utilitarianism cannot easily be applied; conformity with virtue and care ethics is more reasonable in this case.

Initial Decision

Following research and discussion, Group 1 decided to stand against giving AI full control of final decision making; AI should be used only to assist the professionals handling a case.


  1. Artificial Intelligence (AI), Jake Frankfield
  2. How to Mitigate Bias in AI Systems | Toptal
  3. Case Studies on the Use of AI by Judges and Legal Professionals
  4. 5 Ways Artificial Intelligence Can Boost Productivity, Javier Jimenez
  5. AI in Law and Legal Practice – A Comprehensive View of 35 Current Application
  6. Why Artificial Intelligence is Already a Human Rights Issue | OHRH
  7. Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. (ProPublica)
  8. Inclusion And Ethics In Artificial Intelligence | by Debra Ruh | Medium
  9. Should legal disputes be decided by artificial, rather than human means?
  10. Can AI Be More Efficient Than People in the Judicial System?

12 thoughts on “Should Robots Force You Behind Bars?!”

  1. Opening statement. The problem is stated but needed more development. You focused on where AI is used, which is understandable, but you needed a sentence or two more on the particular topic: using AI to determine sentencing. Or is it determining justice? There seems to be a clear dilemma, but is it about sentencing or judging?

    Arguments for: Good use of ethical theories. However, there were a number of sub-sections where a particular theory wasn’t explicitly named. Example: Legal research and prediction.

    Arguments against: Excellent use of ethical theories.

    Advice for Assignment Two: Have a think about the stakeholders – there are a number outside the obvious (such as defendants and courts). Try and clarify your actual topic: is AI being used to determine justice (acting as a jury) or to determine sentencing (acting as a judge)?
    With regards to options for action, it looks like you’ve identified one in the initial decision.

    Try and drum up more comments. I’m perfectly OK with you striking deals – whereby you comment on other articles and they comment on yours.

  2. As a mobile and autonomous systems expert I am familiar with the ethics of autonomous robots and I can completely agree on some of the theories stated in this mini article.

  3. Very interesting topic!! Both sides are well discussed. I agree with implementing AI in the court of law, as it can contribute to assisting lawyers, and with further development in such a field we would be able to solve crimes easily and efficiently. Also, the ethics for AI in the court of law seem reasonable and meet the requirements. Was worth reading, thank you!

  4. Very interesting topic!! Both sides are well discussed. I agree with implementing AI in the court of law, as it can contribute to assisting lawyers, and with further development in such a field we would be able to solve crimes easily and efficiently. Was worth reading, thank you!

  5. Interesting read! This is actually a very hard topic to debate, and many could agree or disagree with what I’m about to say, but utilitarian ethics in such a case would be very hard to take into account: no one would ever want to be put in jail just because a robot is programmed to do so, and as mentioned in this article it is difficult to understand how an AI ends up making such a decision. Thereby, I’m against this.

  6. This is my comment “Very interesting topic, I liked the choice of the precise scope of AI in law instead of discussing the mainstream topic of AI taking over the manufacturing sector. I agree with your decision to stand against the total replacement of lawyers and judges with robots and algorithms. I found the systematic bias argument very convincing and supported by the relevant ethical theory.”


    Moataz Hegazy

  7. A very interesting read. Even though AI can be helpful in aiding decision making in a court of law, human input is necessary to ensure the best decisions are made as AI can never be programmed to be as understanding as humans are.
