A deepfake is a form of synthetic media created using deep-learning technology, whose primary use is the imitation or mimicry of individuals. These videos often take the form of face-swap or lip-sync videos, which appropriate an individual’s identity to make them appear to say or do things they have never said or done. In an early notorious example, the filmmaker Jordan Peele produced a deepfake of then-President Barack Obama, with the overall message that political deepfake technologies were not only possible but could actively harm truth. In the five years since the emergence of deepfakes, academics, politicians and legal scholars have expressed similar concerns about the capacity of deepfakes to erode truth and spread dis/misinformation. We are interested in the harm deepfakes do to authentic media and how this evidences a growing distrust both of video as a medium and of journalism, society, and organizations. This work draws on the overlap between two PhD research projects, a LERO/SFI project on deepfakes and an IRC project on conspiracy interventions. We draw on two pieces of research from these projects: a review of current conspiracy interventions and qualitative research on the use of deepfakes in misinformation. We show how deepfakes have already eroded truth in online spaces and examine the potential for deepfake-fueled conspiracy theories. We then relate this to existing interventions for conspiracy theories and their efficacy. Finally, we theorize what a future deepfake conspiracy intervention might look like and provide a number of recommendations for best practice.
