Deepfakes use machine learning to fabricate events that never happened. While amusing and creative, there are pressing concerns about the social and political implications of this rapidly evolving technology.
Deepfakes have started to appear everywhere – from viral celebrity face swaps to impersonations of political leaders.
Millions got their first taste of the technology when they saw a fabricated video of former US president Barack Obama using an expletive to describe then-president Donald Trump, or actor Bill Hader shape-shifting on a late-night talk show.
Earlier this week, social media went into a frenzy after deepfakes surfaced of actor Tom Cruise in a series of TikTok videos that appear to show him doing a magic trick and playing golf, all with a smoothness that was unsettlingly realistic.
“Here’s the crazy thing about this Tom Cruise deepfake. This isn’t even a super high quality deepfake and I’m willing to bet that it could fool most people. Now imagine the quality of deepfake a government agency could produce.” — Yashar Ali 🐘 (@yashar), February 26, 2021
What are deepfakes?
A deepfake is an artificial intelligence (AI) generated video or audio clip of a real person doing or saying fictional things. A computer uses “deep learning” algorithms to study the movements or sounds in two different recordings and combines them to produce realistic-looking fake media.
There are several methods to create deepfakes.
The most common relies on the use of deep neural networks involving autoencoders that employ a face-swapping technique. The autoencoder program is tasked with studying video clips to understand what a person looks like from a variety of angles and environmental conditions, and then maps that person onto an individual in a target video by finding common features.
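The shared-encoder, per-identity-decoder arrangement behind this face-swapping technique can be sketched in a few lines. The sketch below is a toy illustration with random linear weights rather than a trained network, and every name and dimension in it is invented for the example: one encoder is shared across both people, so it captures pose and lighting, while each person gets a private decoder that reconstructs that face.

```python
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM = 64 * 64      # flattened 64x64 grayscale face crop (toy size)
LATENT_DIM = 128        # compressed pose/expression representation

# Shared encoder weights, plus one decoder per identity.
# A real system would train these on many video frames of each person.
W_enc = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01
W_dec_a = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
W_dec_b = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01

def encode(face):
    # Compress a frame to the shared latent representation.
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    # Reconstruct a face from the latent code with one identity's decoder.
    return W_dec @ latent

def swap_face(frame_of_a):
    # Encode a frame of person A, then decode it with person B's
    # decoder: B's face rendered with A's pose and expression.
    return decode(encode(frame_of_a), W_dec_b)

frame_of_a = rng.standard_normal(FACE_DIM)
fake_frame = swap_face(frame_of_a)
print(fake_frame.shape)  # same shape as the input frame
```

The key design point is that only the decoders are identity-specific; because the encoder is shared, whatever it learns about angle and lighting transfers between the two faces.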
Another method uses a form of machine learning known as generative adversarial networks (GANs), in which two neural networks compete: one generates the fake footage while the other hunts for flaws in it. Over many rounds of this contest, the forgery becomes progressively harder to detect.
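That generator-versus-detector dynamic can be caricatured with a toy one-dimensional example, invented purely for illustration: real GANs use deep neural networks and gradient descent on both sides, whereas here the “data” is a bell curve, the “discriminator” is a simple threshold, and the “generator” has a single parameter to adjust.

```python
import numpy as np

rng = np.random.default_rng(1)

REAL_MEAN = 4.0   # "real data" comes from a normal distribution N(4, 1)
mu = 0.0          # generator parameter: it emits samples from N(mu, 1)
lr = 0.1          # how aggressively the generator corrects its flaws

for _ in range(200):
    real = rng.normal(REAL_MEAN, 1.0, size=256)
    fake = rng.normal(mu, 1.0, size=256)

    # Discriminator: in 1D its best split is the midpoint between the
    # two sample means; anything on the far side looks fake to it.
    threshold = (real.mean() + fake.mean()) / 2.0

    # Generator: nudge mu so its samples land on the "real" side of
    # the discriminator's threshold, i.e. repair the flagged flaw.
    mu += lr * (threshold - fake.mean())

print(round(mu, 1))  # mu drifts toward the real distribution's mean
```

Each round the detector points out how the fakes differ from the real data, and the generator shrinks exactly that difference; after enough rounds the two distributions are nearly indistinguishable, which is the property that makes GAN-made deepfakes so hard to spot.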
The technology itself is nothing new. It’s part of how Hollywood studios include actors in roles after they’ve died and how 3D movies are made. It’s also how gaming companies let players control their favourite athletes.
What is new is that the process has become cheaper and widely accessible.
The amount of deepfake content is growing at an alarming rate: DeepTrace Technologies counted 7,964 deepfake videos online at the start of 2019, and by the end of the year the figure had nearly doubled to 14,678.
Since deepfakes emerged in late 2017, apps and software for generating them, such as DeepFaceLab, FakeApp and FaceSwap, have become readily available, and the pace of innovation has only accelerated.
And companies have moved swiftly to monetise it.
In 2019, Amazon announced that Alexa devices could speak with the voices of celebrities. On Instagram, deepfake videos of virtual artists backed by Silicon Valley money can bring in millions of followers and revenue without paying talent to perform. And in China, a government-backed outlet introduced a virtual news anchor that would “tirelessly” work around the clock.
Increasingly, startups are attempting to commercialise deepfakes by licensing the technology to social media and gaming firms.
But the potential for misuse is high. There is concern that in the wrong hands, the technology could pose a national security threat by spreading fake news through misleading, counterfeit videos.
The Brookings Institution summed up the political and social dangers that deepfakes pose: “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”
At the moment, the most pressing concern has been the hijacking of women’s faces in ‘revenge porn’ videos. In fact, according to a DeepTrace report, pornography makes up an astounding 96 percent of deepfake videos found online.
Digital impersonations are starting to have financial repercussions too. In the US, an audio deepfake of a CEO reportedly scammed one company out of $10 million, and in the UK an energy firm was duped into a fraudulent transfer of $243,000.
One solution to combating the rapid growth of deepfakes has been to turn to AI itself.
Sensity, a visual threat intelligence platform that applies deep learning to monitoring and detecting deepfakes, has built a detection platform that monitors more than 500 sources where the likelihood of finding malicious deepfakes is high.
“Detection of #deepfakes and GAN-generated faces is live on Sensity! 🔍 We are making Sensity detection technology accessible and as intuitive as possible: drag & drop multiple files and get a summary result in a few seconds.” — Sensity (@sensityai), February 8, 2021
Beyond technological remedies, there is growing appetite to seek legislative redress to combat the dissemination of deepfakes.
California enacted a law in 2019 that made it illegal to create or distribute deepfakes of politicians within 60 days of an election. But enforcing bans is easier said than done, given the anonymity of the internet.
Other legal avenues include defamation claims and the right of publicity, but the breadth of those doctrines may limit their impact.
In the short term, responsibility will have to fall on the shoulders of social media giants like Facebook and Twitter.
As legal scholars Bobby Chesney and Danielle Citron argue, tech platforms’ terms-of-service agreements are “the single most important documents governing digital speech in today’s world,” making those companies’ content policies the most “salient response mechanism of all” to our rapidly advancing deepfake dystopia.