A deepfake is synthetic media in which machine learning and AI are used to manipulate or create visual or audio content. It is most commonly used to superimpose one person's likeness onto another's image or video. This article discusses the ethics of deepfake implementation.
Deepfakes facilitate the creation of virtual companions that could help combat the loneliness epidemic seen worldwide. Loneliness may soon outstrip obesity as a public health crisis, with 61% of Americans reporting that they are lonely. It can be argued that the joy of human companionship cannot easily be replicated, as humans genuinely choose to spend time with you. Additionally, if the reciprocation of feelings is not considered genuine, loneliness could be exacerbated rather than alleviated. Deepfake technology also enables individuals to relive a moment with a deceased relative or friend by replicating their voice or image, allowing a seemingly real conversation to take place. However, is this solution to death merely extending the grieving process, or in some cases even preventing this natural human behavioural process from taking place?
From a care ethics perspective, promoting the well-being of care-receivers is fundamental, implying that providing a virtual companion is in itself ethical, as its intention is to care for those who are vulnerable to loneliness or grief. Furthermore, from a utilitarian standpoint, any companion is arguably better than no companion at all. However, virtue ethics is concerned with the character of the actor implementing the technology. From this viewpoint, the implementation is subject to commercialisation, which undermines the morality of its usage, since motivation by profit is not considered virtuous. Duty-based ethics judges right and wrong based on the morality of the motivation behind an action. According to research, seven cross-cultural moral rules have been identified, two of which are 'help your family' and 'help your group'. By these moral rules, using deepfake technology to create virtual companions is ethical, since it supports and comforts family members not only through loneliness but also through grief.
Deepfakes can increase the accuracy and applicability of AI technology. To use AI effectively, the quantity of input data matters. In some areas, such as brain cancer scans, it is difficult to collect large amounts of image data due to the small number of cases. To overcome this limitation, companies have collaborated to generate synthetic brain MRI scans. Training on a 1:9 mix of real to synthetic data yields results almost equivalent to training on 100% real data. From a consequentialist view, deepfake technology increases the efficiency with which AI can analyse medical scans, contributing to the net well-being of patients through more reliable diagnosis. Duty-based ethics also supports this application, because the intent behind using deepfakes in the medical field is to save lives, which aligns with care ethics as well.
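The 1:9 mixing step described above can be sketched as a simple data-assembly routine. This is a minimal illustrative sketch, not the companies' actual pipeline: the function name, the `real_fraction` parameter, and the toy placeholder "scans" are all assumptions for demonstration.

```python
import random

def build_training_set(real_samples, synthetic_samples,
                       real_fraction=0.1, total=1000, seed=0):
    """Assemble a mixed training set with the given fraction of real data.

    real_fraction=0.1 mirrors the 1:9 real-to-synthetic ratio mentioned
    in the text; all names and numbers here are illustrative assumptions.
    """
    rng = random.Random(seed)
    n_real = int(total * real_fraction)      # e.g. 100 real samples
    n_synthetic = total - n_real             # e.g. 900 synthetic samples
    mixed = (rng.choices(real_samples, k=n_real)
             + rng.choices(synthetic_samples, k=n_synthetic))
    rng.shuffle(mixed)                       # interleave before training
    return mixed

# Toy usage: labelled tuples stand in for real and GAN-generated MRI scans.
real = [("real", i) for i in range(50)]
fake = [("synthetic", i) for i in range(500)]
train = build_training_set(real, fake, real_fraction=0.1, total=1000)
print(len(train))                                      # 1000
print(sum(1 for kind, _ in train if kind == "real"))   # 100
```

In a real setting the samples would be image arrays and the mixed set would feed a model's training loop; the point of the sketch is only that the real-to-synthetic ratio is an explicit, tunable parameter.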
Deepfakes are no strangers to the pornographic film industry; deepfake pornography has been created of celebrities such as Gal Gadot and Emma Watson. Hedonism, the maximisation of one's pleasure, could in theory justify such use. With consent, no harm is inflicted on any party's privacy, reputation or personal rights. This agrees with the Freedom Principle, which states that people have the right to pursue their own sources of pleasure provided they do not inhibit the pleasure of others.
It can be argued that deepfakes are akin to sexual fantasies, which are no more than virtual images. Much like a sexual fantasy, whether deepfake pornography is materially harmful, and therefore whether it is permissible, remains debatable. This does not lead us to a black-and-white answer on whether deepfakes used for entertainment are ethical or unethical.
Moreover, deepfake pornography could increase the crime of sextortion: non-consensual content created to blackmail, humiliate or harm is an exploitation of the technology. One study found that about 96% of deepfake videos are pornographic, and many have been used to victimise women, as in the Bella Thorne case. One paper argues that the societal impact of deepfakes favours men over women, which violates egalitarianism, the principle of prioritising equality in society. Deepfakes condemn women, far more than men, to being treated as sexual objects.
To Trump or Be Trumped
Deepfakes can have a profound impact on the political processes of liberal democracies. In 2018, a deepfake video surfaced of former US president Barack Obama, demonstrating the ease with which a deepfake can manipulate viewers by disseminating false information. This can affect democratic processes such as electoral campaigns and pose threats to national security by prompting militaries to act on bad information; from a utilitarian view, it is therefore considered unethical due to its far-reaching global consequences. Deontologists also warn that using deepfakes in political campaigns is precarious, because judgements of right and wrong depend too heavily on the individual's motivation for using them. This was exemplified during the emergence of Nazism and Japanese imperialism, where utilitarian ideas polarised into totalitarianism.
Age of Deceit
Deepfakes serve to undermine trust in information and journalism, potentially leading to an age in which humanity can no longer determine the credibility of a medium's content, contributing to the fragmentation of our public discourse. However, it can be argued that the problem is a technical one, as much deepfake research already seeks methods of detection. A notable example is the Deepfake Detection Challenge, a coalition of leading tech companies seeking innovative technologies that can help detect deepfakes. On the contrary, it can be argued that detection techniques create a "moving goalpost": each new detection algorithm prompts new methods of evading detection.
Despite these negatives, creating deepfakes is an exercise of free speech, inherent to the Principle of Individual Freedom. Censoring the 'misuse' of deepfakes threatens our fundamental right to communicate an opinion through voice or video, regardless of its intent.
We are for this technology.