Deepfake: The Age of Disinformation

Group 1

Deepfake is generative media in which machine learning and AI are used to manipulate or create visual or audio content. It is most commonly used to superimpose one person's likeness onto another person's image or video. This article discusses the ethics of deepfake implementation.

Techpanionship

Deepfakes facilitate the creation of virtual companions that could help combat the loneliness epidemic seen worldwide. Loneliness is predicted to soon outstrip obesity as a public health crisis, with 61% of Americans reporting that they are lonely. It can be argued that the joy of human companionship cannot be easily replicated, since humans genuinely choose to spend time with you. Additionally, if the reciprocation of feelings is not considered genuine, then loneliness could be exacerbated rather than alleviated. Deepfake technology also enables individuals to relive a moment with a deceased relative or friend by replicating their voice or image, allowing a seemingly real conversation to take place. However, is this solution to death merely extending the grieving process, or in some cases even preventing this natural human behavioural process from taking place?

From a care ethics perspective, promoting the well-being of care-receivers is fundamental, implying that providing a virtual companion is in itself ethical, since its intention is to care for those who are vulnerable to feelings of loneliness or grief. Furthermore, from a utilitarian standpoint, any companion is arguably better than no companion at all. However, virtue ethics is concerned with the virtue of the actor implementing the technology. From this viewpoint, the implementation is subject to commercialisation, which undermines the morality of its usage, since motivation by profit is not considered virtuous. Duty-based ethics judges right and wrong based on the morality of the motivation behind an action. Research has identified seven cross-cultural moral rules, two of which are ‘help your family’ and ‘help your group’. Based on these moral rules, using deepfake technology to create virtual companions is ethical, since it facilitates comforting family members not only through loneliness but also through grief.

Breathing Data

Deepfake techniques can increase the accuracy and applicability of AI technology. To use AI effectively, the amount of input data is important. In some areas it is difficult to collect large amounts of image data, for instance brain cancer scans, due to the small number of cases. To overcome this limitation, companies have collaborated to create synthetic (‘fake’) brain MRI scans. Training a model on a 1:9 mix of real to synthetic data yields results almost equivalent to training on 100% real data. Following a consequentialist view, deepfake increases the efficiency with which AI can be used to analyse medical scans, contributing to the net well-being of patients through more reliable diagnosis. Duty-based ethics also supports this application, because the intent of using deepfake within the medical field is to save lives, which also aligns with care ethics.
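
To illustrate how such data augmentation might look in practice, the sketch below blends a small pool of real scans with nine times as many synthetic ones before training a simple classifier, evaluating only on held-out real scans. It is a minimal sketch under our own assumptions: the loader functions, the random placeholder arrays and the logistic-regression model stand in for whichever real pipeline and GAN the collaborating companies actually used.

```python
# Minimal sketch: augmenting a small pool of real MRI scans with synthetic
# (GAN-generated) scans at roughly a 1:9 real-to-synthetic ratio.
# Random placeholder arrays stand in for actual image data and GAN output.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
PIXELS = 32 * 32  # each scan flattened to a 32x32 feature vector


def load_real_scans(n):
    """Hypothetical loader: n real scans with labels (0 = healthy, 1 = tumour)."""
    return rng.normal(size=(n, PIXELS)), rng.integers(0, 2, size=n)


def generate_synthetic_scans(n):
    """Hypothetical GAN stand-in producing n labelled synthetic scans."""
    return rng.normal(size=(n, PIXELS)), rng.integers(0, 2, size=n)


# Small pool of real scans; hold some back so evaluation uses genuine data only.
X_real, y_real = load_real_scans(100)
X_tr, X_te, y_tr, y_te = train_test_split(X_real, y_real, test_size=0.3,
                                          random_state=0)

# Nine synthetic scans for every real training scan (the 1:9 ratio).
X_fake, y_fake = generate_synthetic_scans(9 * len(X_tr))

# Train on the blended set, then evaluate on the held-out real scans.
clf = LogisticRegression(max_iter=1000)
clf.fit(np.vstack([X_tr, X_fake]), np.concatenate([y_tr, y_fake]))
print(f"Accuracy on held-out real scans: {clf.score(X_te, y_te):.2f}")
```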

Faking It

Deepfake is no stranger to the pornographic film industry. Deepfake pornography has been made of celebrities such as Gal Gadot and Emma Watson. Hedonism, the maximisation of one’s pleasure, should in theory justify this use. With consent, no harm is inflicted on any party with regard to their privacy, reputation or personal rights. This agrees with the Freedom Principle, which states that people have the right to pursue their own source of pleasure provided that they do not inhibit the pleasure of others.

It can be argued that deepfakes are comparable to sexual fantasies, being no more than virtual images. Much like a sexual fantasy, the materiality of deepfake pornography, and whether it is permissible, can be debated. This does not necessarily lead us to a black-and-white answer on whether deepfakes, when used for entertainment, are ethical or unethical.

Moreover, deepfake pornography could increase the crime of sextortion. Non-consensual content created to blackmail, humiliate or harm is an exploitation of deepfake pornography. One study found that about 96% of deepfake videos are pornographic, and many have been used to victimise women, as in the Bella Thorne case. One paper argues that, as a societal impact, deepfakes favour men more than they favour women, which goes against egalitarianism, the prioritisation of equality in society. Deepfakes condemn women to be treated as sexual objects more so than men.

To Trump or Be Trumped

Deepfake can potentially have a profound impact on the political processes of liberal democracies. In 2018, a deepfake video of former US president Barack Obama surfaced, demonstrating the ease with which a deepfake can be used to manipulate viewers by disseminating false information. This can affect democratic processes such as electoral campaigns and pose threats to national security by prompting militaries to act on bad information; from a utilitarian view it is therefore considered unethical due to the far-reaching global consequences. Deontologists also warn that using deepfakes in political campaigns is a precarious decision, because their rightness or wrongness depends heavily on the individual motivation for using them. This was exemplified during the emergence of Nazism and Japanese imperialism, where utilitarian ideas were polarised into totalitarianism.

Age of Deceit

Deepfakes only serve to undermine trust in information and journalism, potentially leading to an age in which humanity can no longer determine the credibility of a medium’s content, contributing to the fragmentation of our public discourse. However, it can be argued that the problem is a technical one, and much deepfake research is already seeking methods to detect its use. A notable example is the Deepfake Detection Challenge, a competition run by a coalition of leading tech companies seeking innovative new technologies that can help detect deepfakes. Conversely, it can be argued that detection techniques create a ‘moving goalpost’: with each new detection algorithm, new methods of avoiding detection arise.
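
To make the technical side of detection concrete, the sketch below shows what a frame-level deepfake detector might look like: a small convolutional network trained to classify face crops as real or fake. The architecture, input size and dummy data are our own illustrative assumptions, not the models actually submitted to the Deepfake Detection Challenge.

```python
# Minimal sketch of frame-level deepfake detection: a small CNN classifies
# face crops as real (0) or fake (1). The architecture, input size and
# dummy tensors are illustrative assumptions only.
import torch
import torch.nn as nn


class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64 -> 32
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 1),  # one logit; > 0 means "fake"
        )

    def forward(self, x):
        return self.head(self.features(x))


# Dummy batch of 128x128 RGB face crops with real/fake labels.
frames = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()

model = FrameClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step.
optimizer.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()
print(f"Training loss on dummy batch: {loss.item():.3f}")
```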

Despite the negatives, creating deepfakes is an exercise of freedom of speech, which is inherent to the Principle of Individual Freedom. Censoring the ‘misuse’ of deepfakes threatens our fundamental right to communicate an opinion through voice or video, regardless of its intent.

Initial Decision

We are for this technology.

8 thoughts on “Deepfake: The Age of Disinformation”

  1. This is a good article, with excellent use of ethical argumentation.

    I’m personally against deepfake as there is too much capacity to cause harm. One of our primary challenges as humans is to determine what is real; knowingly creating something we know is unreal will disorientate us. Creating something for another that we know to be fake but they don’t (or may not) know to be fake is lying by another name. (Am I just being grumpy?)
    From a utilitarian standpoint the decision that gives the greatest number happiness is desired, but ultimately can we be happy when we interact with a fake?

    If we interact with the representation of a deceased loved one, are we just delaying the necessary process of grief? We’re just denying our reality, aren’t we? Suspending reality when we read a book, watch a film or play a game is different but, of course, there are cases where even those pastimes become an opportunity to deny reality.

    Good, thought-provoking article. Well done!

    Finally, what is “Duty-free ethics”? 🙂

  2. From my standpoint, the technology is already extremely accessible to anyone with access to a personal computer, making its use extremely difficult to regulate and control. The problem is how this technology can be regulated in a way that minimises its misuse, whether that be by legal, technological or educational solutions. If there are positive benefits to society in certain applications of its use, as shown in the article, then it’s something which can be encouraged and promoted to balance out the net well-being to society.

    Ultimately, like many inevitable technologies, it must be steered in a positive direction, otherwise only the negative misuse will exist, no matter how many resources are put into regulating it.

  3. Interesting topic! In my opinion, I don’t think it is beneficial to use deepfakes to relive moments with our loved ones, as this does not help with the user’s mental and emotional state. Using this won’t help them to move on, and they will never learn to let go as they will always use it as a solution to cure their longing for the ones they have lost.

    Also, I see more harm than good in using this technology. Hackers and scammers can easily manipulate so many people for their own benefit.

    Anyway, I love this article and good job!

    1. I agree that using this technology, or any technology for that matter, to solve grief or loneliness is not a viable solution. Addressing the underlying problems that cause these issues within society is, I believe, a better approach. This undermines the positive grounds for deepfakes, and therefore the negatives really do override them.
      Thank you for your input!

  4. I wasn’t aware of the potential benefits of this technology; interesting read. Personally, as you mentioned within the article, the global consequences really are far-reaching, and that potential to cause more harm than good is what really sways the argument in my opinion. In a consequentialist sense, this technology is thus unethical. Even with the positives of medical applications or virtual companions, their net contribution to society cannot justify the prevalent misuse that already occurs with this technology.

  5. Every time a new technology appears, we need to question the impact it will have on us and on the people around us. We cannot rely on governments and laws for guidance, as they are notoriously ineffective and inefficient at keeping up with the pace of progress.

    Now that this knowledge is both available and accessible, I feel as though our best bet is to educate people so that they can understand the potential of face-swapping/deepfake technology, including its consequences on others.

    Good, thought-provoking read nonetheless.

  6. Other comments suggest an educational approach to solving the problem, which I do agree with. However, this misuse occurs within the medium of social media sites. Big tech firms such as Facebook and Twitter have an enormous set of moral and political responsibilities. An industry-wide commitment to basic legal standards, significant regulation and technological ethics to tackle the problems of deepfakes would go a long way in mitigating their misuse.

  7. Thank you all for your points of view! These are all valid points which really validate my views as well.

    As suggested in the article, one place we can turn to solve the problems of technology is technology itself. So yes, big tech firms have a huge responsibility in regulating unethical misuse, and educating the masses matters just as much. For example, this could be likened to the argument for sex education. While some conservative parents worry it will drive their kids to have sex, the reality is that they are going to do it anyway. The difference is that sex ed programs, when done properly, teach kids how to protect themselves and how their actions can impact others.
