AI and deepfakes: The genie is out of the bottle

We have crossed a threshold: moving images can now be manipulated in ways that make them indistinguishable from reality.

A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg / Photo: Getty Images

From Hollywood and Washington to social media timelines and the wider web, deepfakes — a portmanteau of ‘deep learning’ and ‘fake’ — and other synthetic media made with artificial intelligence (AI) have been proliferating faster than any of us could have imagined.

In 2023 alone, at least 95,820 deepfake videos were found to have infiltrated the internet.

Sure, it may be cool and fun to watch Kendrick Lamar morph into Kanye West, Will Smith, and Nipsey Hussle in the music video for the Pulitzer Prize-winning rapper’s 2022 single The Heart Part 5. You may even feel a sense of catharsis replaying a rendered clip of Jon Snow apologising for the final season plot of Game of Thrones if you are among those (still) dissatisfied with how the iconic series ended.

But what happens when these instances of AI manipulation are applied for more nefarious purposes? Data and surveys indicate that most people cannot reliably detect deepfakes, with some admitting to watching videos they initially thought were real, only to find out later they were fake.

For the most part, the spotlight on the dangers deepfakes pose falls on disinformation of the political variety. The technology, however, has been consistently used to harass and abuse women, including the likes of Taylor Swift, whose privilege and pop-star status did not shield her from becoming a victim of non-consensual pornographic material.

“Unfortunately, we should not be that surprised that deepfakes are used to degrade women,” Sophie Toupin, an assistant professor in Université Laval’s Department of Information and Communication, tells TRT World.

After sexually explicit deepfakes of the Anti-Hero singer went viral on social media platforms in late January, Swift’s ardent fanbase, known as Swifties, sprang into action. Fans quickly swamped X with positive images of Swift and reported accounts that were sharing the deepfakes, causing the #ProtectTaylorSwift hashtag to trend.


The way Swifties fought back holds immense social and cultural importance, says Toupin, as it helps establish norms regarding the acceptable and non-acceptable use of AI-generated synthetic media — whether you are one of the world’s biggest pop stars or not.

“While this rapid response was immensely successful, it's important to acknowledge that not everyone has access to such robust community support,” Toupin says. “Taylor Swift's fan community served as a notable example of a feminist response that firmly asserted, ‘This is not acceptable.’

“I hope all women and girls who are subjected to such degrading practice are extended the same rapid response. In fact, at the community level, this type of solidarity is exactly what is needed to say no to misogynist and racist fake AI-generated content online,” Toupin adds.

In simple terms, deepfakes are a form of synthetic media that uses AI to swap one person’s face and likeness with another’s in video.

The technique was initially propelled by a Reddit user going by the moniker ‘deepfakes’, who in 2017 began posting non-consensual, digitally altered adult content featuring celebrity faces superimposed on the bodies of women in adult films.

Speaking to Vice’s Motherboard, the user said that they employed multiple open-source libraries to create the videos, including Google’s TensorFlow, which is free to use.
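The core technique behind those early face swaps is conceptually simple: a single shared encoder learns a compressed representation of faces, a separate decoder is trained for each identity, and the swap consists of routing one person’s encoding through the other person’s decoder. Below is a minimal, hypothetical PyTorch sketch of that shared-encoder, two-decoder design; the layer sizes, the 64x64 input resolution, and all names are illustrative assumptions, and the training loop is omitted.

```python
# Conceptual sketch of the classic deepfake face-swap architecture:
# one shared encoder, one decoder per identity. Illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

shared_encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's face crops
decoder_b = Decoder()  # would be trained only on person B's face crops

# Training (omitted) minimises per-identity reconstruction loss.
# The swap itself is just cross-wiring encoder and decoder:
face_a = torch.rand(1, 3, 64, 64)            # stand-in for a cropped face
swapped = decoder_b(shared_encoder(face_a))  # A's pose, B's face
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

Production pipelines add face detection, alignment and blending around this core, but the swap itself is no more than this cross-wiring.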

“On the technical front, it's crucial to bear in mind that the ease with which many people can create AI-generated deepfakes hinges on the available AI-generated software,” Toupin explains. “These software have been trained on vast amounts of content generated and shared by us, internet users, over the past two decades. This is an important issue to bear in mind.”

Following what happened to Swift, lawmakers and ordinary citizens demanded stronger protections against AI-created images, which have been mushrooming at an alarming rate from Australia to Spain and everywhere in between, overwhelmingly harming women and children.

Recently, Victorian MP Georgie Purcell called out a local news outlet for altering an image of her to enlarge her bust and partially remove her clothing, calling it an example of the “ongoing, insidious” treatment of women in politics by the media.

According to one 2023 report, deepfake pornography makes up 98 percent of all deepfake videos online, and 99 percent of the individuals targeted in deepfake pornography are women, underscoring the need for a better regulatory framework.

“Let’s remember that if we consume synthetic media that has been done without the consent of the person portrayed, you are also part of the problem,” says Toupin. “It is important that you understand that you are taking pleasure in gender-based digital violence.”

As revolutionary as it is, the deepfake era is creating new challenges and fears for lawmakers and governments, who have been scrambling to properly regulate a technology that can make anyone appear to say or do anything at any given moment.

The growing availability of disinformation and deepfakes, a 2022 report by the European Union Agency for Law Enforcement Cooperation (Europol) states, will “have a profound impact on the way people perceive authority and information media,” besides undercutting trust in authorities and official facts.

“Experts fear this may lead to a situation where citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable; a situation sometimes referred to as ‘information apocalypse’ or ‘reality apathy,’” according to the Europol report.

For a long time, people turned to photos and videos to corroborate what they read before accepting it as fact, essentially heeding the adage ‘Don’t believe it until you see it’.

With the presidential election slated for November, the latest data say that Americans are “highly concerned” about the use of AI and deepfakes of candidates and political figures, with a strong majority of voters across party lines believing that the technology should be properly regulated.


Their worry is not unfounded. In January, a fake robocall mimicking the voice of President Joe Biden with AI urged New Hampshire residents not to vote in the state’s Democratic primary election.

And in September 2023, Florida Governor Ron DeSantis was on the receiving end of a viral deepfake announcing he was dropping out of the 2024 presidential race; his own campaign had released an AI-manipulated video depicting former President Donald Trump embracing Anthony Fauci a few months earlier.

"The tools and systems necessary to produce the stuff are readily available,” technology industry analyst Charles King of Pund-IT told Forbes. “Just as importantly, the current political climate in the US is so fractured and ugly that there are large and ready audiences of people on both sides ready to believe the worst about others."

In journalism and investigative work, audio and visual recordings, alongside photographs and text, are not only often regarded as reliable evidence, but also help journalists and investigators determine what is real and what is not.

Other forms of digital manipulation, such as Photoshop, existed before, but hiding the traces of editing usually required a certain level of expertise, and the results were generally not as sophisticated as those rendered by AI.

With AI in the picture, the potential for false information grows exponentially, more so during times of conflict and war, says Dr Kalev Hannes Leetaru, founder of the GDELT Project, a real-time database of global events, language, and tone.

“There's a lot of potential right now for bad actors — whether it's [about] Gaza, whether it's [about] Ukraine or any other conflict — to use these tools to create hyper-personalised, at-scale falsehoods,” Leetaru, whose work revolves around leveraging advanced technologies and understanding the way they reshape global society, tells TRT World.

“Imagine where a social media [platform] or bad actor could look across all the population of the entire country, and then target every person there and give them something that they know, based on their history, is going to fire them up so much and really tear societies apart,” he explains, adding, “Or conversely, when legitimate documentation of war crimes emerges from a conflict zone.”

In 2022, a fake video showed Ukrainian President Volodymyr Zelenskyy telling his soldiers to lay down their arms against Russia, but many viewers correctly discerned signs of digital manipulation, including a face slightly out of sync with his head and an accent that sounded off.

The following June, several Russian media outlets fell for a similar but more convincing faked video of President Vladimir Putin. And in December, the Russian president was confronted with an AI deepfake of himself at his annual news conference.

Israel’s war on Gaza, which has killed more than 27,800 Palestinians and wounded over 67,000 others since military attacks began on October 7, has also increased fears about AI’s power to mislead, the Associated Press reports.


Right now, plenty of real imagery and firsthand accounts of carnage are coming out of the besieged enclave, but they can appear alongside a medley of false claims and distorted truths. Earlier in the war, AI-generated images, including one viral picture of a baby crying amid the ruins of a bombing, circulated on social media, while photos and videos from other conflict zones were passed off as recent proof of what’s happening in Gaza.

AI has only continued to improve, and it will keep doing so, just as computer animation and editing software such as Photoshop did. Nearly anyone can create a persuasive fake by entering text into readily available AI generators like DALL-E or Midjourney to produce images, video or audio.
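To get a sense of how low that barrier is, a single prompt and a few lines of Python are enough to request a synthetic image from a commercial generator. This is a minimal sketch assuming OpenAI’s official Python SDK (openai>=1.0) and an API key in the OPENAI_API_KEY environment variable; the model choice and prompt are illustrative.

```python
# Minimal text-to-image sketch using OpenAI's Python SDK.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

response = client.images.generate(
    model="dall-e-3",  # illustrative model name
    prompt="a photorealistic news photo of a flooded city street at dawn",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # link to the generated image
```

Hosted services like this one layer safety filters over such requests; open-source image models that run locally carry no comparable guardrails.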

Experts, such as Leetaru, say the mere fact that deepfakes exist can confuse or lead people to cast doubt on real news or authentic imagery. “Now you're suddenly flooding the environment with known false information, so you can say, ‘Well, that imagery that is war crimes, that's just fake news like this other stuff.’”

Put simply, the ability to create compelling fake evidence is worrisome, and on top of that, it also allows people to dismiss real evidence, undermining trust in recorded images and videos as objective depictions of reality.

Andreas Kaplan, president and managing director of Germany’s Kühne Logistics University and its professor of digital transformation, notes there is little doubt that AI-generated content, including deepfake videos and similar technologies, plays a significant role in shaping narratives, influencing public opinion, and amplifying specific perspectives.

However, its impact on the spread of false information, he says, is a lot less powerful.

“To put it another way, generative AI primarily simplifies the supply of misinformation and disinformation, rather than the demand for it or its subsequent dissemination,” Kaplan tells TRT World, adding that the real issue emerges when people rely on AI tools for information, which “is not vastly different from accepting information on Wikipedia as fact or believing in robocalls received during a political campaign.”


While he acknowledges regulatory bodies often lag behind the current state of affairs, Kaplan highlights that there are still efforts being made to address the abuse and misuse of AI-generated media content, including deepfakes. “Noteworthy is the European Union Commission's draft regulation on artificial intelligence, which mandates the labelling of all content created using deepfake technology.”

What Kaplan is referring to is the AI Act, first proposed in April 2021, for which a provisional agreement was reached on December 9, 2023. The agreed text will have to be formally adopted by the European Parliament and the Council to become EU law. Once approved, the act will lay down the world’s first comprehensive rules on AI.

“Similarly, the United States has recently intensified efforts to introduce legislation aimed at combating deepfakes, a move likely spurred by the approach of the presidential election this November.”

Canada is also studying a bill that would begin regulating some AI systems. And in the UK, the Online Safety Act, passed in October 2023, aims to make the country “the safest place in the world to be online” through new laws that take a zero-tolerance approach to protecting children from online harm.

For the average social media user, according to Kaplan, determining whether something is trustworthy means being cautious with the material you consume and evaluating it critically rather than accepting what’s presented at face value.

“It’s also important to recognise your own biases, as we often have a tendency to believe information that aligns with our preconceptions,” Kaplan says. “Being aware of this bias can aid in a more objective analysis of the information,” adds the professor, whose research areas include advances in AI, digitalisation, and social networks.

“To summarise, the more sensational the news seems, the more critical it becomes to scrutinise its source.”
