What are the problems with deepfake porn?

"Creating fake erotic images is not inherently bad..." says Sophie Maddocks. "However, when fake nude images of people are created and distributed without their consent, it becomes deeply harmful." (Credit: Getty Images)

“Deepfake porn” is on the rise. Here’s how it harms the people it purports to depict.

With rapid advances in AI, the public is increasingly aware that what you see on your screen may not be real. ChatGPT will write everything from a school essay to a silly poem. DALL-E can create images of people and places that don’t exist. Stable Diffusion or Midjourney can generate a fake beer commercial, or even pornographic images that pair the faces of real people who have never met.

So-called “deepfake porn” is becoming increasingly common: creators take paid requests for porn featuring a person of the buyer’s choice, and a plethora of fake not-safe-for-work videos circulate on sites dedicated to deepfakes.

The livestreaming site Twitch recently released a statement against deepfake porn after a slew of deepfakes targeting popular female Twitch streamers began to circulate. Last month, the FBI issued a warning about “online sextortion scams,” in which scammers use content from a victim’s social media to create deepfakes and then demand payment in exchange for not sharing them.

Sophie Maddocks, a doctoral student in the University of Pennsylvania’s Annenberg School for Communication, studies image-based sexual abuse, like leaked nude photos and videos and AI-generated porn.

Here, Maddocks talks about the rise of deepfake porn, who is being targeted, and how governments and companies are (or are not) addressing it:

Q

What is deepfake porn? How popular is it?

A

Deepfakes are visual content created with AI technology that anyone can access through apps and websites. The technology can use deep learning algorithms trained to remove clothes from images of women and replace them with images of naked body parts. Although these algorithms could also “strip” men, they are typically trained on images of women.

Deepfakes really caught the public’s attention in 2017. Two years later, in 2019, there were 14,678 deepfake videos online, 96% of which were pornographic, and at the time every one of those pornographic videos featured women, according to research from Deeptrace Labs.

The rise of AI porn adds another layer of complexity, as newer technologies like Stable Diffusion generate fake porn images outright. These synthetic sexual images are AI-generated in the sense that they do not depict real events, but the models behind them are trained on images of real people, many of which are shared non-consensually. In online spaces, it is difficult to disentangle consensually distributed images from non-consensually distributed ones.

Creating fake erotic images is not inherently bad; online spaces can be a great way to explore and enjoy your sexuality. However, when fake nude images of people are created and distributed without their consent, it becomes deeply harmful.

Q

What level of technical knowledge is required to make these images?

A

Anyone can create their own deepfake porn images, regardless of their skill level, using websites with deepfake generators.

Q

The media often uses the phrase “revenge porn.” Are these terms interchangeable?

A

“Revenge porn” is defined as the non-consensual creation or distribution of explicit images. Although journalists often use that term, it is widely rejected by survivors, activists, and other experts, in part because it implies the victim did something to provoke the abuse and frames the harm around the perpetrator’s motive. They prefer terms such as “image-based sexual abuse.” When it is produced without the consent of the person featured, deepfake porn is an example of image-based sexual abuse.

Q

Are the majority of deepfakes made by people who know the person whose likeness they are using?

A

Many examples of deepfake porn target celebrities and women with high public profiles, presumably made by people who do not know them personally. We can see from the people they target that these pornographic deepfakes often seek to silence and shame women by spreading disinformation about them.

Q

Are there any patterns in the demographics of who is targeted?

A

Broadly speaking, minoritized women and femmes are more likely to experience image-based sexual abuse, as are single people and adolescents. LGBTQ populations are also at increased risk of harassment. More research is needed to understand how this harm affects other minority groups, including trans people and sex workers, who anecdotally appear to be at increased risk.

Q

Have any artificial intelligence companies addressed their role in deepfake creation?

A

Stable Diffusion, an AI text-to-image model, has made generating AI porn more difficult through an update that attempts to filter out nudity. In 2018, Reddit banned certain not-safe-for-work (NSFW) communities that used AI to generate porn.

Q

Are there any laws regarding fake porn in the United States (or elsewhere)?

A

In the UK, a law has been passed that covers sharing (not creating) deepfake porn. In the US, only four states have deepfake laws—New York, Virginia, Georgia, and California. Some laws addressing image-based sexual abuse are expansive enough to include deepfake porn.

Q

Are researchers or activists proposing ways to combat deepfake porn?

A

It is difficult to envisage solutions that address deepfake porn without challenging the broader cultural norms that fetishize women’s non-consent. The rise of misogyny online, through which some men perceive themselves to be victims of increasing efforts toward gender equality, creates the conditions for deepfake porn to proliferate as a form of punishment targeted towards women who speak out.

Q

How do you see your research on this topic evolving in the future?

A

I’m increasingly concerned with how the threat of being “exposed” through image-based sexual abuse is affecting adolescent girls’ and femmes’ daily interactions online. I am eager to understand the impacts of the near-constant state of potential exposure that many adolescents find themselves in.

Source: Penn