Predictive Policing: Will cameras become racist?

Group 32

No New Faces

For decades, technology has been helping law enforcement to solve crime, so it comes as no surprise that machine learning has now become a preferred method of tackling it. However, using data analysis to predict crimes before they happen is a relatively new idea. Such a possibility may seem like the plot of a Hollywood movie – Minority Report, which depicts a futuristic society in which people are arrested before they commit a crime. But this is no longer the stuff of science fiction. Companies such as IBM have already started to combine AI technologies with surveillance cameras to predict crime before it happens. The software is based on military and governmental security systems, which can build a profile of an individual in real time by looking at their micro-expressions – “minuscule twitches or mannerisms that can belie a person’s nefarious intentions” – and predict how likely the person is to commit a crime. If the probability is high, law enforcement agencies will take pre-emptive measures. This data-fuelled analytics is called “predictive policing”.

Policeman’s Third Eye

Any victim of a crime will say it was a horrible experience, and each of the 4.5 million offences recorded last year had a victim. Victims may suffer injury and loss, as well as trauma from witnessing a shocking incident, and many will be scarred for life. There is no doubt that crime is a scourge on society, and the ability to prevent it from occurring could be invaluable. The effects are not limited to the human experience: the UK police force requires £12.3 billion per year to operate, a significant portion of the government’s budget. It may be morally wrong for the police to watch all these people suffer knowing that technology is available that could have prevented it.

A particular problem faced by the authorities is a lack of substantial evidence and, often, a lack of suspects; for example, 75% of theft cases were closed last year with no identified suspect. Facial recognition and data analysis may provide a more just system, giving the police the tools and data to make more informed decisions.

The utilitarian argument would strongly support this action: it holds that the morally right action is the one that maximises the happiness of all, and a tool to prevent crimes would therefore maximise the happiness and quality of life of the general population.

Society’s Deterrent  

As crime-prediction software is implemented in a city and continuously developed, its success rate will increase. As the success rate increases, more and more businesses and communities will be willing to have such systems in place in their vicinity. This has been seen in Detroit, where the introduction of a rudimentary crime-prediction scheme (called ‘Project Green Light’) has resulted in a 50% decrease in violent crime. Consequently, many small businesses and residential communities have shown their support for the project and are now investing in its development. With increased support from the local community, combined with a higher success rate, crime-prediction software will serve as a deterrent against criminals and would-be offenders.

A deterrent has often proven to be a more effective measure against criminals than heavier policing. Deterrents are based upon a branch of utilitarian theory called Deterrence Theory, which holds that the threat of retaliation is enough to scare away the majority of potential wrongdoers.

But… do we really want to live with Big Brother?

The implementation of these surveillance systems could open the door to cyber threats from terrorist organisations. According to the UK’s GCHQ, “cyber security would be of a prime concern if AI surveillance techniques were to be adopted”. If these systems were hacked, the sensitive data stored on them could be compromised, breaching data protection regulations. Furthermore, cyber criminals could manipulate these systems to create false positives, i.e. flagging people who are unlikely to commit a crime as potential criminals, and vice versa, turning the system against innocent people. Moreover, researchers from Cornell University have found that the use of such AI surveillance systems could create a feedback loop that reinforces institutional bias. Bias isn’t the only problem, say data ethics experts: some of the data collected by the police is incomplete, inconsistent and inadequate, and with machine learning, ‘garbage in equals garbage out’. The facial recognition and prediction software is expected to form judgements on how likely an individual is to commit a crime by comparing what it picks up on that individual’s face against criteria which define someone who is about to commit a crime. However, these criteria are far from easily defined, and in the age of Big Data they will automatically be based on huge amounts of past security camera footage.

Therefore, as machine learning develops based on past occurrences, if there is an initial institutional bias, whereby society as a whole judges an individual more likely to commit a crime than another solely on the basis of physical attributes such as ethnicity, that bias may be reinforced by the feedback loop in machine learning algorithms. Indeed, the Metropolitan Police has been described as ‘still institutionally racist’ by its own Black and Asian officers. Despite the uncertainty, there is a real threat of the technology using biased data. Should this occur, the individuals targeted by the bias would have their right to a private life (‘to live your life with privacy and without interference by the state’) breached under Article 8 of the Human Rights Act 1998, as they would be more likely than others to have their lives interfered with for no valid reason. Duty Ethics, which defines an action as morally right if it agrees with a law, rule or norm, would therefore judge the use of facial recognition software to be morally wrong. Central to Kant’s theory is the consideration of whether one group of people is treated in the same way as other groups, and facial recognition software has the potential to do the opposite.

Initial Judgement

The idea of predictive policing could potentially help the police to stop crimes from taking place, but implementing such technologies would mean stepping into an ethical minefield for which the police are not prepared. Thus, the authors of this article are thoroughly against implementing facial recognition software in smart cities to predict crime.

57 thoughts on “Predictive Policing: Will cameras become racist?”

  1. It’s amazing to see how computers are now being manipulated to learn poor human behaviour. As engineers, we should avoid creating AI systems which create such institutional bias. I believe companies such as IBM should refocus their objectives toward other technologies rather than these.

    1. Hi guys, thank you so much for showing such interest in our article. It’s not just IBM; companies such as PredPol introduced their predictive AI system to Kent Police. However, the force has now stopped using it, as it could not justify the cost against the actual reduction in the crime rate.

    2. “As engineers, we should avoid creating AI systems which create such institutional bias. I believe companies such as IBM should refocus their objectives toward other technologies rather than these.”

      100% we should avoid institutional bias, but there are ways of doing this in these algorithms, for example weighting the training data so that it’s representative of the general population. We already have bias in our institutions; these algorithms could provide a way to automate processes and remove the individual biases we all have. In the UK we can see in court data that harsher sentences are given out in the afternoon (likely tired, grumpy judges); algorithms don’t have these problems.
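
      As a rough illustration of what that weighting could look like (the toy data, group names and population shares below are entirely invented), per-group sample weights might be computed along these lines:

      ```python
      # Hypothetical sketch: reweight training samples so each demographic group
      # contributes in proportion to its share of the general population,
      # rather than its (possibly skewed) share of the arrest records.
      import pandas as pd

      # Invented toy data: group membership and an "arrested" label.
      train = pd.DataFrame({
          "group": ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B"],
          "arrested": [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
      })

      # Assumed population shares (illustrative only).
      population_share = {"A": 0.5, "B": 0.5}

      # Weight = desired share / observed share, so over-represented groups are
      # down-weighted and under-represented groups are up-weighted.
      observed_share = train["group"].value_counts(normalize=True)
      train["weight"] = train["group"].map(lambda g: population_share[g] / observed_share[g])

      print(train.groupby("group")["weight"].sum())  # roughly equal total weight per group
      # These weights could then be passed to a classifier, e.g. via the
      # sample_weight argument of scikit-learn's fit method.
      ```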

  2. I completely agree with the group’s decision to say no to such projects. The first issue is: how reliable is the data? Such AI systems are often based on deep learning neural networks, and wrong data would mean that potential criminals are missed while hard-working civilians are tracked.

    The cyber threats are also quite significant; recently India’s CCTV systems were hacked by Chinese hackers. So this is unethical from my perspective.

  3. This is disgusting! How can companies such as IBM agree to such morally unjustified projects? I totally agree that such technologies should not be implemented.

  4. Very good idea to use AI technology to predict and prevent crimes before they happen. There are a lot of CCTV cameras already installed in cities, and implementing such a technique might not be that challenging. However, is it ethical to prosecute anyone based on proof extracted only from an artificial system?

      1. This is exactly my thought. I might think of stealing a bike but then stop myself before doing so; if the camera saw me while I was thinking about it, how can they convict me?

        1. Thank you for your comments
          In the UK especially, they were using this technology by following the predicted offenders and waiting for them to actually commit the crime. But imagine how many resources are necessary to do this.

          1. It may be resource-intensive, but all proper policing is. The alternative actually requires more policing: if it’s less targeted, you need a larger presence on the street. The reason the police are using systems like this is because they are under-funded and under-resourced.

  5. In some respects this is good, as it will help to save the public funds used on policing. Rather than profiling people, statistical prediction methods for predicting crime locations would be beneficial.

  6. AI should not be used to judge people; this system is purely judgemental. However, as in the movie Minority Report, people could misuse such systems for their own benefit. You never know what such technologies could do to our community. And I don’t believe in judging people, do you?

    1. Thank you Ben,

      Surely no one should be scrutinised based on their racial features. I would rather like this technology if it could predict the location of the crime, rather than the actual person.

  7. On one hand, the system is helping the community to become safer by predicting crime before it happens, but on the other, it is helping AI systems to become racist. I say this because how else could a camera detect the likelihood of a person committing a crime? For example, the news these days has manipulated civilians into thinking that terrorists come from the Islamic world, but we know that is not true. But how can a system think for itself? It all depends on the mindset of the programmer who is writing or developing such neural networks. So I think the costs of such a system outweigh its benefits.

    1. “it is helping AI systems to become racist. I say this because how else could a camera detect the likelihood of a person committing a crime?”

      There are lots of other ways, primarily focusing on behaviour. Is someone pacing in an airport, frequently looking towards security cameras, with a concerned look on their face? That’s when these algorithms would flag the individual and pass it on to a human to review the footage.

      “But how can a system think for itself? It all depends on the mindset of the programmer who is writing or developing such neural networks. So I think the costs of such a system outweigh its benefits.”

      No it doesn’t; it depends on the training data given to the model and how you decide to ‘reward’ that model whilst it’s training. For example, there was a recent case in a hospital with automatic soap dispensers (which detected the presence of a hand using visual cues): because the creators of the system trained it on white hands, it performed very poorly with people of colour. The programmer could have been the biggest black rights activist in the world, but because their training data was biased it didn’t matter.

      What the programmer needs to do (and all good programmers do) is resample the data so that there isn’t an imbalance between the different sample ‘classes’. Going back to the airport example, this would mean the algorithm is given training data where the criminal samples are weighted equally across different ethnic backgrounds, according to, say, the general distribution throughout the population.
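
      A rough sketch of that sort of resampling (the dataset and group labels are invented purely for illustration) could look like this:

      ```python
      # Hypothetical sketch: downsample the 'criminal' examples so that every
      # group contributes the same number of positive training samples.
      import pandas as pd

      # Invented toy data: label 1 = sample labelled as a criminal case.
      data = pd.DataFrame({
          "group": ["A"] * 8 + ["B"] * 2,
          "label": [1, 1, 1, 1, 1, 1, 0, 0, 1, 0],
      })

      positives = data[data["label"] == 1]
      n_per_group = positives["group"].value_counts().min()  # size of the smallest group

      # Sample the same number of positives from each group, then put the rest back.
      balanced_positives = positives.groupby("group").sample(n=n_per_group, random_state=0)
      balanced = pd.concat([balanced_positives, data[data["label"] == 0]])

      # Each group now contributes the same number of 'criminal' samples.
      print(balanced[balanced["label"] == 1]["group"].value_counts())
      ```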

      1. Totally agree, AI_overlord! This is why it is useful but somewhat scary.
        We design these machines to do hard and tedious jobs for us, without thinking how far they can go…

    2. Hi Elma,

      In a way, machine learning algorithms are built and coded to think for themselves. The algorithm is able to detect patterns it observes, and if the outcome of a certain pattern correlates with what we are looking for (in this case criminals) it ‘learns’ it, or in other words saves it in its memory. Therefore, to a certain extent it is not all up to the programmer. However, what is in the hands of the programmer is the data with which the algorithm will learn these patterns, so it is essential that we ensure this data does not hold any bias.
      But I think this is impossible; as you say, if society believes most terrorists come from the Islamic world, this may well be reflected in the data provided to these algorithms…

      If this technology is to be implemented, which unfortunately I think may well happen at some point in the next 100 years, it should be balanced with very frequent checks of some sort to ensure that no bias is occurring.

      Thank you for your comment!

      Theo

      1. I am from the group, and yes, it is literally anything to do with the face: as you say, the physical attributes, but also, more importantly, micro-expressions, which may indicate high stress for example, or more generally the expressions that would be expected from someone who is about to commit a crime.

        The problem is that in reality no one actually chooses the parameters indicating that someone will commit a crime. The algorithm is fed training data, from which it builds patterns between the micro-expressions on an individual’s face and the fact that this individual actually did go on to commit a crime afterwards.

        An example could be (this has been completely invented just for the sake of explanation):
        The algorithm has found that 92% of people showing cold sweats (not sure this is possible, but anyway) go on to commit a crime. Therefore, when the system analyses another individual displaying cold sweats, it will raise their score for the likelihood of committing a crime. If this individual accumulates parameters that raise their score, the system might raise the alarm.

        The system becomes an issue if your score simply goes up because of your skin colour, for instance.
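
        In code, that kind of score accumulation might look roughly like the toy sketch below; the cues, weights and threshold are all invented, just like the 92% figure above.

        ```python
        # Toy sketch of the score-accumulation idea above. No real system is
        # being described; the cues and weights are made up for illustration.
        CUE_WEIGHTS = {
            "cold_sweats": 0.4,
            "rapid_glancing": 0.25,
            "pacing": 0.2,
        }
        ALARM_THRESHOLD = 0.6

        def crime_likelihood_score(observed_cues):
            """Accumulate a score from whichever cues the system believes it saw."""
            return sum(CUE_WEIGHTS.get(cue, 0.0) for cue in observed_cues)

        def should_raise_alarm(observed_cues):
            return crime_likelihood_score(observed_cues) >= ALARM_THRESHOLD

        print(should_raise_alarm(["cold_sweats"]))                    # False: one cue alone
        print(should_raise_alarm(["cold_sweats", "rapid_glancing"]))  # True: cues accumulate
        # The bias problem described above is what happens if something like skin
        # colour effectively ends up in this table of weights.
        ```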

  8. Good article, with sections for and against. What I like most is the point about who is going to take the blame if these systems were to malfunction: is it the manufacturers or the government? So I don’t think we are ready for such technologies yet.

  9. Cameras are already being used in most public places, so using facial recognition to locate, for example, a fugitive who could cause harm to the public could be seen as ethical. However, the idea of detecting ‘minuscule twitches or mannerisms’ to determine a person’s intentions could cause more trouble than it deters crime. I do not think the technology is at a stage to make decisions like this; for that reason I would say that this idea is unethical.

  10. I have the impression that AI used in this sector will always be too biased. Different models are needed in different countries and in different sectors. The choice of the training data by the programmer will have a huge effect on the model outcome. Is there any way to standardise the choice of the training data? How would you define a reliable model for facial recognition and micro-expression recognition? As millions of people are involved, testing the quality of the model is essential… how do you do this?

    1. “I have the impression that AI used in this sector will always be too biased”

      How come? One of the things we know about technology is that its rate of progress doesn’t appear to be slowing down: 15 years ago automated facial recognition was incredibly difficult, now your phone can pick you out from thousands of other people. Why is it that AI (which has been progressing quicker than the majority of other tech) won’t follow this trend and see significant improvement in the near future?

      “Different models are needed in different countries and in different sectors.”

      Currently this is true, and we’re seeing that in the models being created; Facebook, for example, has country-specific models which it uses in its automated photo tagging. Looking forward, we’re likely to see more generalised models which are able to pick up on subtleties like country and sector and change their output accordingly.

      “The choice of the training data by the programmer will have a huge effect on the model outcome. Is there any way to standardise the choice of the training data?”

      Yes it does, and the way we’re going to remove bias in the models is by removing it from the training data. There are lots of approaches; one of the most common is resampling the data so that its distribution of, say, ethnicity is representative of the environment the model will be deployed in.

      “How would you define a reliable model for facial recognition and micro-expression recognition?”

      By getting people to manually label a dataset of photos with, say, ‘brow furrowed’, then training a model to detect that feature and label unseen photos, then training a model to decide if that feature increases the likelihood of a crime. Tesla has huge offices in Kenya where people are labelling videos and highlighting objects such as cars, which is then fed into their self-driving algorithms. Amazon has a service where you can advertise this sort of work – https://www.mturk.com/.
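
      A very simplified sketch of that two-stage idea, using scikit-learn on made-up numeric “image features” instead of real photos (everything here is invented for illustration):

      ```python
      # Stage 1: human-labelled photos -> model that detects 'brow furrowed'.
      # Stage 2: detected expressions -> model that scores crime likelihood.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)

      photo_features = rng.normal(size=(200, 5))               # stand-in for pixel features
      brow_furrowed = (photo_features[:, 0] > 0).astype(int)   # pretend human labels
      brow_model = LogisticRegression().fit(photo_features, brow_furrowed)

      detected = brow_model.predict(photo_features).reshape(-1, 1)
      crime_label = rng.integers(0, 2, size=200)               # pretend outcome labels
      crime_model = LogisticRegression().fit(detected, crime_label)

      # New unseen photo: detect the expression, then score it.
      new_photo = rng.normal(size=(1, 5))
      expression = brow_model.predict(new_photo).reshape(-1, 1)
      print(crime_model.predict_proba(expression)[0, 1])       # P(crime) given the expression
      ```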

      “As millions of people are involved, testing the quality of the model is essential… how do you do this?”

      There are a wide range of methods you can use to test the models. A simplified approach to what they’re likely doing in these models is separating a bunch of video clips into a training and a testing dataset, with each of the videos labelled ‘crime’ or ‘no crime’. They then train the model on the training data and test it against the unseen data to make sure the model isn’t overfitting. If it gets 80% of the crimes correct then we can say it’s 80% accurate. More likely the models take a probabilistic approach and the output would be along the lines of “there’s a 75% chance a crime may be about to be committed”.
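
      A toy version of that evaluation, with made-up numeric features standing in for video clips (the data and model are illustrative only), might be:

      ```python
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 10))                                    # pretend per-clip features
      y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)   # 'crime' / 'no crime' labels

      # Hold out unseen clips to check the model is not just overfitting.
      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

      model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

      print("accuracy on unseen clips:", accuracy_score(y_test, model.predict(X_test)))
      # Probabilistic output, as described in the comment above:
      print("P(crime) for the first unseen clip:", model.predict_proba(X_test)[0, 1])
      ```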

      1. I am mostly responding to your third point.

        I agree that re-sampling the training data would help if we can be sure that the data is again unbiased. We might be able to get perfect and even coverage of all ethnicities, sexes, ages etc… but who tells us that the individuals in this training data were fairly judged?
        What I mean is that if in this data, police were more likely to arrest people of darker skin colour (I use this example because it has been a big topic of controversy in history) this will be ‘learnt’ by the algorithm.

        But I agree that in any machine learning algorithm, the use of validation data is essential to test the reliability of the model. Then again, if the validation data says that people of darker skin colour are more likely to commit crimes than other individuals, given that the same micro-expressions have been picked up for darker and lighter skin, the model will be validated with the bias in it.

        Also, I think that if we wanted to, we could potentially create our training data by having ‘real’ humans from all possible backgrounds, sexes, ages, opinions and political orientations spend hours and hours discussing together to try and obtain perfectly unbiased data… but this might be difficult.

        Thanks again for raising very interesting points Fran and AI_overlord!

        Theo

        1. “I agree that re-sampling the training data would help if we can be sure that the data is again unbiased … if in this data, police were more likely to arrest people of darker skin colour (I use this example because it has been a big topic of controversy in history) this will be ‘learnt’ by the algorithm.”

          This is where resampling comes in: if you weight the training data so that an equal number of black and white people are arrested, then it won’t be learnt by the algorithm.

          We can actually take this one step further. If we use this new model on the original (non-resampled) historic data, it will flag white people it thinks should have been arrested; a human can then go through what the model flags as a false positive and take a view as to whether it should have been labelled differently in the first place. We can then use this newly re-labelled data to train an improved model, and iterating this process will leave us with a far less biased model than what humans currently produce.
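
          A conceptual sketch of that iterate-and-relabel loop (the data, the model and the ‘human review’ step are all mocked purely for illustration):

          ```python
          import numpy as np
          from sklearn.linear_model import LogisticRegression

          rng = np.random.default_rng(0)
          X = rng.normal(size=(300, 4))                              # stand-in features
          labels = (X[:, 0] + rng.normal(size=300) > 0).astype(int)  # noisy historic labels

          def human_review(index):
              """Stand-in for a person deciding whether a flagged case was mislabelled."""
              return rng.random() < 0.5          # pretend half the flags are upheld

          for round_number in range(3):          # a few relabelling rounds
              model = LogisticRegression().fit(X, labels)
              predictions = model.predict(X)

              # Cases the model thinks should have been positive but the data says were not.
              flagged = np.where((predictions == 1) & (labels == 0))[0]
              relabelled = np.array([i for i in flagged if human_review(i)], dtype=int)
              labels[relabelled] = 1             # accept the reviewer's decision

              print(f"round {round_number}: reviewed {len(flagged)}, relabelled {len(relabelled)}")

          final_model = LogisticRegression().fit(X, labels)  # trained on the cleaned-up labels
          ```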

      2. Hello,

        Thanks a lot for your answers. My questions were purely out of curiosity, as I know a bit about machine learning. As things stand, would you feel confident applying AI to this field? Do you think more research needs to be done, as it is such a delicate topic?

        1. No worries, thanks for the comments.

          Do I think AI should be used to make decisions as to whether someone has/will commit a crime? No. Do I think AI should be used to flag whether someone has/will commit a crime? 100% Yes.

          Police are massively under-resourced and any tools which can help reduce their workload should be welcomed. Filtering video footage and passing on moments which contain likely occurrences of crime will aid in this considerably.

          No model is 100% correct but many are helpful.

          1. *Do I think AI should be used to flag whether someone has/will commit a crime? -> Do I think AI should be used to flag whether someone might have/could commit a crime?

  11. I’m on the fence. We’re already well down the road of secret surveillance and it’s not all bad. The dilemma is how the information is used and by whom. Could we trust the police with this technology at present? Seems like there is potential for both good and bad… as usual.

    1. Hi Kevin,

      That is another very valid point! In the instance where we can trust the police and the justice system 100%, then I think by all means go ahead. However, the next big question this topic raises is: what do we do once our system is in place and we assume it works with no flaws or bias?
      Assuming the system is 100% accurate, each alarm raised by the program means someone will commit a crime, so why shouldn’t we arrest them as if they had already committed it?

      Then, if we add in the uncertainties…

  12. This kind of technology will raise a lot of ethical issues. On one side the world (or country) becomes safer, and on the other, this could take away our freedom!

    Great debate!

  13. Can you provide other ethical support in the cases for and against, please?
    What you’ve provided so far, in terms of ethical support, seems the most appropriate; have a think about whether other theories can also be used.
