Deepfakes and cheap fakes: the biggest threat is not what you think

Almost all current deepfake usage targets and abuses women. Why isn’t this part of the policy discourse?

2020 has been called “the year deepfakes went mainstream”: in just a few years, their use has rocketed from the shadowy corners of the internet to ads, TV shows, and even identity protection in documentaries.

In 2018, a viral video depicting Barack Obama calling Donald Trump an expletive he never actually uttered demonstrated the potential of such technology. Suddenly academics, journalists, pundits, and politicians alike warned of the looming threats of deepfakes to democracy. 

Think tanks published reports on the potentially damaging effects of such AI, showing how it could be used to distort truth and democratic discourse, weaken journalism, weaponise disinformation, inflict permanent damage on “prominent” individuals, damage businesses through white collar cybercrime, and inflame social tensions and divisions. 

Missing, however, from mainstream policy discourse is the very real threat to almost 4 billion people in this world: women. 

A 2019 report found that 96 percent of deepfake videos online were pornographic, and that those pornographic videos exclusively targeted women. 

To repeat: almost all deepfake videos target and abuse women. 

So while deepfakes could potentially be used to manipulate elections or spread propaganda, their current primary use is absent from the popular discourse. Why isn’t there more discussion surrounding this, and what can be done?

What are deepfakes?

Deepfake, a portmanteau of “deep learning” and “fake”, is the video, audio, and text equivalent of Photoshop. Generated by deep learning artificial intelligence, deepfakes draw on inputs of existing images, videos, and sounds to create simulations - of varying quality - of people, voices, and actions. 
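
For readers curious about the mechanics: the original Reddit-era face-swap deepfakes relied on a simple autoencoder trick, in which one shared encoder learns a common facial representation and a separate decoder is trained for each identity, so a frame of one person can be decoded as another. The PyTorch sketch below is purely illustrative - the layer sizes, resolution, and names are assumptions for demonstration, not taken from any actual deepfake tool - but it captures the basic idea.

```python
# Minimal sketch of the autoencoder face-swap concept behind early deepfakes.
# One shared encoder learns a common face representation; each identity gets
# its own decoder. Swapping = encode person A, decode with person B's decoder.
# All sizes below are illustrative assumptions, not from any real tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()        # shared across identities
decoder_a = Decoder()      # would be trained to reconstruct person A's face
decoder_b = Decoder()      # would be trained to reconstruct person B's face

# After training each decoder on its own identity (reconstruction loss),
# a "swap" encodes a frame of person A and decodes it as person B:
frame_of_a = torch.rand(1, 3, 64, 64)  # placeholder for a cropped face frame
swapped = decoder_b(encoder(frame_of_a))
```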

A related term - “cheap fakes” - refers to video manipulation created with cheaper, more accessible software, using methods like photoshopping, speeding up or slowing down video, lookalikes, and recontextualising AV material. Both methods are becoming easier to use and more widespread: a report by Sensity, a company that tracks and investigates online deepfake usage, shows that the number of deepfakes is growing at an exponential rate, and that the number of creators and sources is rising too. 

Deepfakes were initially complex and difficult to make, but as the underlying AI becomes more sophisticated, producing a convincing result is getting easier. Deepfakes (and cheap fakes) are also increasingly easy to access and create, whether through step-by-step guides on the internet, online portals, or individuals offering their services to potential buyers.

Deepfakes’ explosion into the popular discourse came after Vice journalist Samantha Cole uncovered a Reddit forum (which has since been closed) where user u/deepfakes used deep learning to face-swap female celebrities into pornographic videos. 

Giorgio Patrini, CEO of Sensity, says that most of the deepfakes online are made with the same few methods and the same few open-source AI tools. “The reason is they are very easy to use and they are very well-maintained and known by the communities,” he told Discover Magazine. 

When public becomes private

Earlier deepfakes focused on female celebrities, but there has been a rise in deepfakes using images of influencers as well as private pictures of individuals taken from social media - or even secretly snapped photographs of women. 

A recent investigation found an AI-powered Telegram bot that allows users to digitally “strip” clothed women. According to the report, the vast majority of targets were private individuals, and most images appeared to be taken from the targets’ social media accounts or other private channels, “with the individuals likely unaware that they had been targeted.” The bot remains on Telegram, and as of October 2020 anywhere from 104,000 to 700,000 women and children had been targeted.

In other words, the proliferation of these tools means that any woman with an ordinary image or video posted online - whether by herself or by someone else, and never sexually explicit - could potentially become the target of ‘revenge porn’, a type of image-based abuse that includes the “nonconsensual sharing and creation of sexual images” through deepfakes.

Since deepfakes can be mistaken for real images or videos, their potential social, professional, and personal ramifications, in addition to their more intangible effects, are very real. A 2019 report from the UK Council for Internet Safety showed that victims of revenge porn suffered psychological and emotional harm; were subjected to online and offline harassment; experienced mental health problems like panic attacks, PTSD, depression, and suicidal ideation; suffered damage to their professional reputations; and felt that their personal and bodily integrity had been violated. “The harms caused by revenge porn may be similar to those in other sexual crimes,” the report says. 

Deepfake images and footage have already been used for blackmail, extortion, humiliation, and abuse against politicians, journalists, and private individuals alike. 

The fact that the images are “not real” does not mean the damage inflicted on victims is any less real.

“They don’t consider it a problem”

The primary focus of governments and media companies remains on detecting deepfake videos, due to the potential threat disinformation campaigns pose to the public. The Pentagon’s Defense Advanced Research Projects Agency (DARPA) is funding research on deepfakes, and Twitter, Facebook, and YouTube have all updated their policies in recent years to cover deepfakes. Facebook, for instance, launched a “deepfake detection” competition in 2019.

These are all, of course, important. Disinformation poses real and potential threats - just look at how a video of Gabon’s president, widely perceived to be a deepfake, helped spark an attempted military coup in 2019.

However, detection of deepfakes is only one piece of the puzzle, and is not a complete solution on its own. 

“[I]ndividual women, journalists, and others who are antagonistic to those who hold economic and political power are going to be the first to confront the politics of evidence in a ‘post-truth’ world,” say Britt Paris and Joan Donovan in the report, "Deep Fakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence".

Moreover, the glaring issue of protecting women, who are the primary target of deepfakes, is virtually non-existent in these conversations. 

For instance, proposed policies on deepfakes have been dismissed by politicians who suggest that fake images are not as harmful as “real” ones, despite testimony from victims. “The law wasn't, and still isn't, prepared to handle content like revenge porn and misuse of non-famous people's images,” Cole wrote. 

Pornography sites also refuse to take down thousands of deepfake videos. “The attitude of these websites is that they don't really consider this a problem,” Patrini told Wired. “Until there is a strong reason for [porn websites] to try to take them down and to filter them, I strongly believe nothing is going to happen.”

Ultimately, technological solutions can only do so much on their own. They have to be combined with policy and social solutions - including addressing structural inequality and promoting media literacy and critical thinking - to offer a comprehensive approach. 

“The Department of Defense can’t save us. Technology won’t save us,” writes Cole. “Being more critically-thinking humans might save us, but that’s a system that’s a lot harder to debug than an AI algorithm.”
