No New Faces
For decades, technology has helped law enforcement solve crime, so it comes as no surprise that machine learning is now a preferred tool for tackling it. Using data analysis to predict crimes before they happen, however, is a relatively new idea. Such a possibility may sound like the plot of a Hollywood movie – Minority Report depicts a futuristic society in which people are arrested before they commit a crime – but this is no longer the stuff of science fiction. Companies such as IBM have already started pairing AI with surveillance cameras to predict crime before it happens. The software behind these systems is based on military and governmental security systems: it builds a profile of an individual in real time by analysing their micro-expressions – “minuscule twitches or mannerisms that can belie a person’s nefarious intentions” – and predicts how likely that person is to commit a crime. If the probability is high, law enforcement agencies can take pre-emptive measures. This data-fuelled analytics is called “predictive policing”.
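To make that description concrete, such a pipeline might look something like the sketch below. This is a minimal illustration of the idea, not IBM’s actual software: the micro-expression features, the model weights, and the 0.8 alert threshold are all invented assumptions.

```python
# Illustrative sketch of the predictive-policing pipeline described above.
# Every name and number here is a hypothetical assumption, not a real system.
from dataclasses import dataclass

@dataclass
class FaceObservation:
    # Invented micro-expression features extracted from camera footage
    brow_twitch_rate: float  # twitches per minute
    gaze_aversion: float     # fraction of time spent avoiding the camera
    lip_compression: float   # 0.0 (relaxed) to 1.0 (tight)

def risk_score(obs: FaceObservation) -> float:
    """Toy linear model mapping micro-expressions to a 'crime probability'.
    The weights are made up; a real system would learn them from footage."""
    score = (0.40 * obs.brow_twitch_rate / 10
             + 0.35 * obs.gaze_aversion
             + 0.25 * obs.lip_compression)
    return min(max(score, 0.0), 1.0)

ALERT_THRESHOLD = 0.8  # assumed cut-off above which police are notified

def assess(obs: FaceObservation) -> str:
    p = risk_score(obs)
    if p >= ALERT_THRESHOLD:
        return f"ALERT (p = {p:.2f}): dispatch pre-emptive response"
    return f"No action (p = {p:.2f})"

print(assess(FaceObservation(9.5, 0.9, 0.9)))  # above threshold: alert
print(assess(FaceObservation(1.0, 0.1, 0.2)))  # well below threshold: no action
```

The ethical questions debated below all live inside this tiny sketch: who chooses the features, who sets the threshold, and what happens to the people the model gets wrong.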
Policeman’s Third Eye
Any victim of a crime will say it was a horrible experience, and each of the 4.5 million offences recorded last year had a victim. Victims may suffer injury and loss, as well as trauma from witnessing a shocking incident, and many are scarred for life. There is no doubt that crime is a scourge on society, and the ability to prevent it from occurring could be invaluable. Nor are the effects limited to human suffering: the UK police force requires £12.3 billion per year to operate, a significant portion of the government’s budget. It may be morally wrong for the police to watch all these people suffer knowing that technology is available that could have prevented it.
A particular problem faced by the authorities is a lack of substantial evidence and, often, a lack of suspects: 75% of theft cases were closed last year with no identified suspect. Facial recognition and data analysis may provide a more just system, giving the police the tools and data to make more informed decisions.
The utilitarian argument strongly supports this position. It holds that the morally right action is the one that maximises the happiness of all, and a tool that prevents crime would therefore maximise the happiness and quality of life of the general population.
As crime-prediction software is deployed in a city and continuously developed, its success rate will improve. And as the success rate rises, more businesses and communities will be willing to have such systems in place in their vicinity. This has been seen in Detroit, where the introduction of a rudimentary crime-prediction programme, Project Green Light, has been credited with a 50% decrease in violent crime. Consequently, many small businesses and residential communities have voiced their support for the project and are now investing in its development. With growing support from the local community and a rising success rate, crime-prediction software would serve as a deterrent to criminals and would-be offenders.
Deterrence has often proven a more effective measure against criminals than heavier policing. It rests on a branch of utilitarian thinking known as deterrence theory, which holds that the threat of retaliation is enough to scare off the majority of potential wrongdoers.
But… do we really want to live with Big Brother?
The implementation of these surveillance systems could open the door to cyber attacks from terrorist organisations. According to the UK’s GCHQ, “cyber security would be of a prime concern if AI surveillance techniques were to be adopted”. If these systems were hacked, the sensitive data they store could be compromised, breaching data protection regulations. Furthermore, cyber criminals could manipulate the systems to create false positives, flagging people who are unlikely to commit a crime as potential criminals, and vice versa, turning the system against innocent people. Moreover, researchers from Cornell University have found that such AI surveillance systems can create a feedback loop that reinforces institutional bias. Bias is not the only problem, say data ethics experts: much of the data collected by the police is incomplete, inconsistent, and inadequate, and with machine learning, “garbage in, garbage out”. Facial recognition and prediction software is expected to judge how likely an individual is to commit a crime by comparing what it reads on that individual’s face against criteria that define someone who is about to offend. Yet these criteria are far from easily defined, and in the age of Big Data they will inevitably be derived from huge amounts of past security camera footage.
Therefore, as machine learning models are trained on past occurrences, any initial institutional bias – for instance, a society judging one individual more likely to commit a crime than another solely on physical attributes such as ethnicity – may be reinforced by the feedback loop in the algorithm. This is not a remote concern: the Metropolitan Police has been described as ‘still institutionally racist’ by its own Black and Asian officers, so there is a real threat of the technology being trained on biased data. If that happens, the individuals targeted by the bias would have their right to a private life – ‘to live your life with privacy and without interference by the state’ – breached under Article 8 of the Human Rights Act 1998, as their lives would be interfered with more than others’ for no valid reason. Duty ethics, which defines an action as morally right if it agrees with a law, rule, or norm, would therefore deem the use of facial recognition software morally wrong. Central to Kant’s theory is whether one group of people is treated in the same way as other groups, and facial recognition software risks doing precisely the opposite.
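The runaway nature of this feedback loop can be demonstrated with a toy simulation, sketched below under invented assumptions: two districts with exactly the same true crime rate, a historical record that slightly over-represents one of them, and a naive predictor that always dispatches patrols to the district with the most recorded crime. Because crime is only recorded where patrols are sent, the initial skew hardens into the data.

```python
import random

random.seed(0)

TRUE_RATE = 0.5              # both districts: identical chance a patrol witnesses a crime
recorded = {"A": 6, "B": 4}  # biased historical record: A slightly over-recorded

for day in range(1000):
    # The 'predictive' model sends today's patrol to the district with the
    # most recorded crime -- i.e. it trusts its own past output.
    target = max(recorded, key=recorded.get)
    # Crime occurs at the SAME rate in both districts, but only the
    # patrolled district's count can ever grow.
    if random.random() < TRUE_RATE:
        recorded[target] += 1

print(recorded)  # roughly {'A': 500, 'B': 4}: the 6-vs-4 skew has hardened
                 # into data that 'proves' nearly all crime happens in A
```

District B’s record never changes because no patrol ever returns there, so the model’s predictions become self-fulfilling: the bias in the input is not merely preserved but amplified into apparent certainty.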
The idea of predictive policing could help the police stop crimes before they take place, but implementing such technologies would mean stepping into an ethical minefield for which the police are not prepared. The authors of this article are therefore thoroughly against implementing facial recognition software in smart cities to predict crime.