Artificial intelligence (AI) refers to programs that replicate human intelligence and decision making. The aim is to develop learning patterns and the ability to perceive situations or problems as humans do. AI already plays a major role in the financial sector, where systems process large volumes of financial activity and flag transactions that appear fraudulent. AI has also found its place in the healthcare industry, aiding doctors in surgeries that require the utmost precision, and it is taking Industry 4.0 by storm amid the vast increase in automation and data exchange in manufacturing. However, while some automation is already present in the legal profession, doubt remains as to whether expanding the use of AI there would be beneficial or ethical.
Robots Should Have Full Control
In many cases, conscious or unconscious bias can play a huge role in the decisions a judge makes; with an AI lawyer this can be avoided. An AI bases its decisions on what is initially fed to the system (its inputs); therefore, by curating those inputs for diversity and building in checks for empathy, AI bias can be mitigated or avoided. This accords with duty ethics, as no discrimination, whether based on colour or gender, is endured.
Likelihood of Recurrence
AI systems have also been developed to replace judges in scenarios that involve minimal evidence. This usually corresponds to civil claims and some criminal claims, where an algorithm called COMPAS is used to assess the likelihood of reoffending. For instance, in Wisconsin v Loomis (2016, US criminal case), the defendant Loomis was charged in connection with a drive-by shooting. COMPAS was applied and reportedly quantified a high risk of reoffending and of violence. Largely on the basis of the COMPAS report, Loomis received a lengthy sentence. Loomis then appealed to the Wisconsin Supreme Court, but the appeal was rejected on the grounds that the same sentence would have been imposed without the algorithm. Under duty ethics, the algorithm's decision was considered right and was even morally endorsed by the judges.
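COMPAS itself is proprietary and its internals are not public. Purely as an illustrative sketch, a recidivism risk score of the kind such tools output can be pictured as a logistic combination of weighted case features; the feature names and weights below are entirely hypothetical:

```python
import math

def recidivism_risk(prior_offences: int, age: int, violent_history: bool) -> float:
    """Toy logistic risk score in [0, 1]. Weights are illustrative only,
    not those of COMPAS or any real tool."""
    z = (0.4 * prior_offences          # more priors -> higher risk
         - 0.05 * (age - 18)           # older defendants score lower
         + (1.2 if violent_history else 0.0)
         - 1.0)                        # baseline offset
    return 1.0 / (1.0 + math.exp(-z))

# A young defendant with many priors and a violent history scores higher
# than an older first-time defendant.
high = recidivism_risk(prior_offences=6, age=22, violent_history=True)
low = recidivism_risk(prior_offences=0, age=45, violent_history=False)
```

The point of the sketch is that such a score is only as meaningful as its hand-chosen features and weights, which is precisely what the Loomis appeal contested.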
AI systems remain scarce in legal practice because of the high costs they incur. AI should therefore be allocated according to the case type, its importance, and the number of people affected. This approach is supported by the ethical theories of utilitarianism and care.
Legal Research and Prediction
Traditionally, lawyers needed frequent library visits to research their cases, which caused trials to span longer than they should. Recently, however, online legal research providers such as LexisNexis and Ross Intelligence have developed AI tools, including online dispute resolution (ODR) systems, that allow lawyers to search for similar cases based on a set of circumstances and to predict the likelihood of a judge ruling in their favour or of a proposed offer being accepted. This is achieved through natural-language-processing software that reads documents and extracts relevant data, allowing cases and disputes to be resolved more efficiently.
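The vendors' actual algorithms are not public, but the core retrieval idea, finding past cases whose text resembles the present circumstances, can be sketched with a simple bag-of-words cosine similarity; the case summaries below are invented for illustration:

```python
import math
from collections import Counter

def cosine_sim(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_similar_cases(query: str, cases: dict[str, str]) -> list[tuple[str, float]]:
    """Rank stored case summaries by textual similarity to the query."""
    q = Counter(query.lower().split())
    scored = [(name, cosine_sim(q, Counter(text.lower().split())))
              for name, text in cases.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

cases = {
    "Case A": "tenant eviction dispute over unpaid rent",
    "Case B": "drive by shooting criminal sentencing appeal",
}
ranking = rank_similar_cases("appeal against criminal sentencing", cases)
```

Production systems use far richer language models than raw word counts, but the ranking-by-similarity structure is the same.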
Document and Intellectual Property Assembly
AI tools also guide lawyers in assembling the documents handed to a judge. This is achieved through software that matches the produced documents against standardised templates, flags missing sections, and suggests a better arrangement of the information.
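A minimal sketch of such a template check, assuming a hypothetical list of required sections for a court filing, could look like:

```python
# Hypothetical template: required sections for a court filing,
# in the order they should appear.
REQUIRED_SECTIONS = ["caption", "statement of facts", "argument",
                     "conclusion", "signature"]

def missing_sections(document_headings: list[str]) -> list[str]:
    """Return required sections absent from a draft's headings."""
    present = {h.strip().lower() for h in document_headings}
    return [s for s in REQUIRED_SECTIONS if s not in present]

draft = ["Caption", "Argument", "Conclusion"]
gaps = missing_sections(draft)
```

Real document-assembly software adds fuzzy heading matching and jurisdiction-specific templates, but the underlying comparison against a standardised checklist is the same.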
Robots Should NOT Have Full Control
Systematic bias forms a barrier to the implementation of AI in courts of law, since a system's utility and effectiveness depend on its programming. Such internal biases can produce prejudiced, racially discriminatory sentencing; this has been a recurring problem, with African Americans handed longer sentences than white Americans for similar crimes. Reliance on digital data for sentencing also introduces the risk of corruption in the form of data tampering to influence decisions. This misaligns with the duty ethics framework, in which equality of treatment is an integral principle. Furthermore, the inevitable lack of transparency between AI systems and court users violates societal norms and virtues.
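One way such bias can at least be surfaced is a simple outcome audit comparing sentences across groups; the records below are fabricated solely to illustrate the computation:

```python
def sentencing_disparity(records: list[dict]) -> dict[str, float]:
    """Mean sentence length (months) per group: a crude fairness audit.
    Input records are dicts with 'group' and 'sentence_months' keys."""
    by_group: dict[str, list[float]] = {}
    for r in records:
        by_group.setdefault(r["group"], []).append(r["sentence_months"])
    return {g: sum(v) / len(v) for g, v in by_group.items()}

# Fabricated records for comparable offences.
records = [
    {"group": "A", "sentence_months": 36},
    {"group": "A", "sentence_months": 30},
    {"group": "B", "sentence_months": 20},
    {"group": "B", "sentence_months": 22},
]
means = sentencing_disparity(records)
# A large gap between groups for comparable offences flags potential bias.
```

An audit like this detects a disparity but cannot explain or correct it; that distinction is central to the transparency objection raised above.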
Figure: Example of racial bias in AI.
Likelihood of Recurrence and Security
Some crimes recur only rarely in the justice system and would ordinarily require human judges and a group of emotionally driven jurors. Rarity leaves minimal data for the computer to process, so AI systems must interpolate from sparse data to reach final decisions. This can produce unfair outcomes and could lead to some cases being dismissed automatically without a trial. Moreover, privacy violations can occur because digital systems are susceptible to security breaches, and sensitive information can be leaked unintentionally. Virtue ethics would be difficult to apply here, as the actor is a lifeless computer making decisions, so accountability and responsibility would need to be redefined.
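The sparse-data problem can be illustrated with a crude standard-error calculation: for the same observed reoffence rate, a handful of prior cases yields an error bar ten times wider than a few hundred. All figures below are invented:

```python
def reoffence_rate_estimate(outcomes: list[int]) -> tuple[float, float]:
    """Point estimate and binomial standard error of a reoffence rate.
    outcomes: 1 = reoffended, 0 = did not."""
    n = len(outcomes)
    p = sum(outcomes) / n
    se = (p * (1 - p) / n) ** 0.5
    return p, se

# Rare crime: only 4 prior cases on record.
p_small, se_small = reoffence_rate_estimate([1, 0, 0, 1])
# Common crime: 400 prior cases with the same observed rate.
p_large, se_large = reoffence_rate_estimate([1, 0, 0, 1] * 100)
```

Both estimates are 50%, but the rare-crime figure is statistically almost meaningless, which is why decisions interpolated from such data can be unfair.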
Encouraging the use of AI lawyers by people who cannot afford a human lawyer could increase human consultancy costs and may reduce the number of pro bono cases taken on by firms, which are usually driven by emotion. Additionally, some cases by their nature require subjective rather than objective judgement. For instance, under the objective judgement a computer applies, a poor man stealing groceries to feed his family would receive the same sentence as a man stealing a television. Such a case requires human lawyers and judges to process it humanely.
Impact on Employment
Implementing AI in law would create redundancies. There would be minimal need for paralegals, since the computer can process information quickly, and judges and jurors could also be made redundant if computers replaced them. This jeopardises interpersonal relationships and neglects the importance of human interaction, a major emphasis of the care ethics framework. Finally, because the pleasure or pain resulting from this application of AI is hard to quantify, the relatively materialistic theory of utilitarianism cannot easily be applied; conformity with virtue and care ethics is more reasonable in this case.
Following research and discussion, Group 1 decided to stand against the full implementation of AI in final decision making: AI should only be used to assist the professionals handling a case.
- Jake Frankfield, "Artificial Intelligence (AI)"
- "How to Mitigate Bias in AI Systems", Toptal
- "Case Studies on the Use of AI by Judges and Legal Professionals", dji.gov.ae
- Javier Jimenez, "5 Ways Artificial Intelligence Can Boost Productivity"
- "AI in Law and Legal Practice – A Comprehensive View of 35 Current Applications"
- "Why Artificial Intelligence is Already a Human Rights Issue", OHRH, ox.ac.uk
- "Machine Bias: There's software used across the country to predict future criminals. And it's biased against blacks.", ProPublica
- Debra Ruh, "Inclusion And Ethics In Artificial Intelligence", Medium
- "Should legal disputes be decided by artificial, rather than human means?"
- "Can AI Be More Efficient Than People in the Judicial System?"