FRT: Future Ready Technology or Far Reaching Tyranny?

Group 13


Facial Recognition Technology (FRT) refers to algorithms that estimate the degree of similarity between two faces. It is commonly used to verify a person’s identity or to identify an individual from an image, and it has been gaining popularity, with the market expected to grow by 17.2% annually from 2020 to 2025. FRT can improve security and accelerate identification processes, but this comes at the expense of personal privacy, which introduces an ethical dilemma.
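As a rough illustration of the "degree of similarity" idea, most FRT systems map each face image to a fixed-length feature vector (an embedding) and compare vectors. The sketch below is purely hypothetical: the embedding values and the match threshold are invented for illustration, and real systems derive embeddings from deep neural networks rather than hand-written numbers.

```python
import math

def cosine_similarity(a, b):
    # Compare two face "embeddings" (fixed-length feature vectors).
    # A value near 1 suggests the same person; near 0 suggests different people.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: two photos of one person, plus a stranger.
face_a = [0.9, 0.1, 0.3]
face_b = [0.85, 0.15, 0.35]
stranger = [0.1, 0.9, 0.2]

MATCH_THRESHOLD = 0.8  # assumed cut-off; real systems tune this per deployment

print(cosine_similarity(face_a, face_b) > MATCH_THRESHOLD)    # True  (match)
print(cosine_similarity(face_a, stranger) > MATCH_THRESHOLD)  # False (no match)
```

The choice of threshold is exactly where the ethical trade-off surfaces in engineering terms: lowering it catches more true matches but also flags more innocent people.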


General happiness is not easily quantified, but a utilitarian may argue that the decrease in a population’s happiness caused by potential security breaches has a greater cumulative magnitude than that caused by uncertainty over privacy. Rather than the technology being merely the “lesser of two evils”, the public genuinely benefits: increased security enables greater certainty in planning for the future.

Additionally, there are clear benefits of facial recognition biometrics for individuals, as the technology allows them to identify themselves for various purposes more efficiently. This can save time when boarding a plane, for example, where a large number of people need to be identified. The US carrier Delta Air Lines claims that its facial recognition system can reduce the boarding time for a plane of 270 passengers by around 9 minutes, which, when scaled to Heathrow Airport’s 1300 flights per day, could mean a saving of 195 human hours per 24-hour period. Another example is opening a bank account: if a person can be identified more quickly, they can open a new account more easily and efficiently. From a utilitarian perspective, the improved efficiency of these processes is a net benefit to the user or customer.
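The Heathrow figure follows directly from the two quoted numbers, as a quick check shows:

```python
# Scaling Delta's claimed per-flight saving to Heathrow's daily traffic.
minutes_saved_per_flight = 9     # Delta's claimed boarding-time reduction
flights_per_day = 1300           # Heathrow's approximate daily flights

hours_saved_per_day = minutes_saved_per_flight * flights_per_day / 60
print(hours_saved_per_day)  # 195.0
```

This assumes every flight sees the full 9-minute saving, so the 195-hour figure is an upper-bound estimate rather than a measured result.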

Following a Kantian approach, a government has an ethical and (in the UK) legislative duty, grounded in the moral principle that, as representative and governor, it should act in the best interests of the people. The use of this technology is a means of fulfilling that duty, led by the categorical imperative that the population is to be kept safe.

To a degree, negative consequences are inherent to any technology. However, the risks associated with the reduction of public privacy can be managed and mitigated, because the problem is continuous and anticipated. It could also be argued that this method of ensuring security is better than the alternative: rather than the public being subjected to the judgements of those who monitor and use CCTV footage, an AI program treats everyone with impartiality.

The risks and advantages of the public application of facial recognition technology are distributed evenly throughout the population: the public may experience a decrease in privacy, but everyone benefits from increased security. With the police better able to manage their time and resources, a proportion of the population may also benefit, knowingly or not, by avoiding harm that would otherwise have occurred through inefficiencies in hazard identification. Hence, the benefits outweigh the risks, and the risks associated with the application of FRT are acceptable.


A primary issue with facial recognition technology is the lack of consent on the recordee’s end, and this affects more than those being directly targeted. To create facial recognition technology that works, it must be trained on hundreds of thousands of people and images. The images used to train the technology have often been uploaded online or, in some cases, taken from public surveillance footage without consent. A variety of organisations then use these data sets, from large international corporations to state police forces. These images can be used to collect biometric data, already a highly contested issue, and to track people’s movements through a location. Even in criminal identification, vast numbers of innocent people have their faces scanned to track one individual: in one instance in the UK, 500,000 civilian faces were scanned to produce 11 arrests. In contrast to Germany and France’s volunteer tests, the UK conducted operational trials on real-life suspects, skipping consensual engagement.

From a utilitarian perspective, if those scanned by FRT have not given informed consent and their privacy has been compromised, the people negatively affected far outweigh those who benefit, and the practice is unethical. In the UK, for example, most people want some restriction on FRT, and if the government is there to reflect the population’s interests, then restriction of FRT should be implemented on utilitarian grounds. Kant stated that humanity should be treated “never simply as a means”. Users of this technology knowingly invade another’s privacy as a means to increase security, which is clearly contrary to Kantian ethics. An increase in security does not justify compromising the right to privacy at the expense of the individual.

It has also been argued that this technology has directly enabled the Chinese government to persecute Uyghurs in the Xinjiang region. Research contributing towards these developments has been described as “ethically indefensible”, with calls for it to be removed from publication. UK police have also faced legal challenges over FRT: one case ended with the presiding judge ruling against the police’s use on the grounds that it allowed “too much discretion”. Two months prior to this, the UK Information Commissioner had warned that none of the deployments she had examined were fully compliant with data protection law.

Those in possession of the technology are usually a select group whose actions impact a large portion of the population, and the utilitarian argument does not support benefiting the few at the expense of the many. FRT has also been used as a weapon to victimise innocent civilians, and even where a legal system exists to provide protection, it can be bypassed. If FRT is knowingly abused by its operators, Kantianism opposes this: applying the categorical imperative, it is against our duty to take part in such immoral actions or to allow them to happen.

Initial decision


2 thoughts on “FRT: Future Ready Technology or Far Reaching Tyranny?”

  1. Feedback
    1. Clarity of problem/dilemma
    I wasn’t fully clear on the opposition to facial recognition technology. You mention privacy concerns, could you explain what the particular concern is, please?

    2. Use of ethical theories in the For Case
    You made use of Kant’s theory and Utilitarianism, which makes sense, but could you have used virtue ethics too? Wanting to protect the safety of people seems like a commendable virtue to me.

    3. Use of Ethical theories in the Against case
    You use Kant’s theory and Utilitarianism, but again could you have used virtue ethics or care ethics?

    4. Advice on Assignment Two
    a. Identifying stakeholders
    b. Courses of action
    With regards to the stakeholders, the obvious one is the public, but don’t forget the companies that supply the technology too.
    Some sort of win-win seems like the best course of action.

    5. Personal remarks
    As a lecturer, I’ve had a number of recordings made of me. My lectures and sometimes my supervisions have been recorded. I’ve not really had a choice in the matter, as with recorded lectures I’m told it’s for the students’ benefit and with supervisions I’m sometimes not asked if it’s OK. Personally, I don’t mind, but it’s interesting how we can capture another’s image without their consent. To some extent, we all do this when we go out in public. Subconsciously, or consciously, we scan faces on the off-chance that we might see a friend, or avoid a threat.
    I wonder if the question isn’t how we obtain the data but what we do with the data once we’ve got it.

  2. This is a very interesting topic.
    It is true that it can be annoying to use FRT scans to access information about people without their consent, especially for those in private organisations. But I think it would not be bad if the government could control it well and use it to increase the security of the public substantially.
