What about deepfakes?
While it’s not a risk this website concentrates on, the proliferation of deepfakes is one of the harms AI enables.
Deepfakes are synthetic media (usually images, but sometimes video or audio) that have been digitally manipulated with AI tools.
A deepfake of Pope Francis in a Balenciaga-inspired white puffer coat.
The generation of synthetic media has been possible for years thanks to image editing software. Recent improvements in AI capabilities have transformed this from a slow process requiring familiarity with tools like Photoshop into one that can be done cheaply and with no special skill. While such manipulation was previously mostly restricted to images, AI tools have also made it possible to create audio and video deepfakes. As of 2024, the major categories of harmful deepfakes include propaganda, pornography, and impersonation.
Deepfakes can be generated for use in propaganda campaigns aimed at large audiences. A typical case would be a doctored image of a politician intended to make them look bad.[1] There are concerns that such media could flood social media and become a major influence on political discourse.
Pornographic deepfakes[2] are photorealistic media that depict people unclothed or in sexualized positions. The subjects are predominantly women[3] and sometimes minors. They may be celebrities, ordinary people “undressed” by people who know them,[4] or people who never existed.[5] Such images are often generated by fine-tuned versions of open-weight AI models.
Finally, there are impersonation scams. The person being impersonated could be a celebrity made to appear to endorse some shady product. Scams can also be customized to their targets: for example, a call that sounds like your employer requesting a transfer of company funds, or like a loved one making a desperate plea for financial help.
Conversely, the prevalence of deepfakes could lead to authentic pieces of media being labeled as fake, e.g., by people who find them inconvenient.
All of these cases present new challenges that will need to be addressed at both the societal and individual levels.
1. Admittedly, such software can also be used to make media that is photorealistic but clearly parody. For instance, in this humorous but informative video, the voices of recent US presidents are cloned in a discussion about ranking AI alignment agendas.
2. This category is sometimes called “non-consensual intimate deepfakes” or “non-consensual intimate imagery”.
3. A 2019 report by Deeptrace found that 96% of deepfake videos were non-consensual pornography and that virtually all of these videos featured women as subjects. In contrast, non-pornographic deepfake videos on YouTube mostly depicted men. A 2023 report found that 98% of deepfake videos were pornographic, with 99% of the subjects being women.
4. This can also be an instance of revenge porn.
5. In particular, the volume of generated child sexual abuse material (CSAM) is overwhelming the law enforcement departments responsible for processing reports of it.