How social media and tech fuel the far right, explained

Social media platforms not only provide a space for the far-right to gather but are also critical in helping it spread its message.

People holding mobile phones are silhouetted against a backdrop projected with the Twitter logo in this illustration picture taken September 27, 2013. (Reuters Archive)

Throughout the resurgence of the far-right in North America and Europe, many have accused social media networks and tech companies of fuelling radicalisation.

Critics also point out that social media outlets like Facebook have been used to live-stream far-right attacks, such as the March assault on two mosques in Christchurch, New Zealand, which killed 51 people.

As the United States headed towards the November 2016 presidential elections, Donald Trump’s candidacy helped embolden a resurgent far-right.

The alt-right – a loosely-knit coalition of white nationalists and neo-Nazis – rode Trump’s coattails, throwing its support behind his anti-immigrant and anti-Muslim policies, among others.

But the movement’s rise wouldn’t have been possible without its strong online presence, which some experts say traces its roots back to the 2014 Gamergate controversy.

That controversy was born of sexism and bigotry in videogame culture, and it spawned a long campaign of online harassment against critics.

Many of the people and tactics prominent during Gamergate became integral to the alt-right’s surge.

Although the alt-right has been marginalised in the wake of the deadly August 2017 Unite the Right rally in Charlottesville, Virginia, far-right violence has continued to spread in the US.

The far-right has also experienced a resurgence in several European countries and elsewhere, such as Brazil, where Jair Bolsonaro became president in January 2019.

Many far-right groups and individuals have relied on money-sharing and crowd-funding apps for financial support, though in recent years activists have successfully pressured those apps to drop such groups and figures.

Around the world, research suggests, tech companies, social media outlets and anonymous online messaging forums have contributed to the growth of the far-right and the spread of misinformation.

YouTube as a ‘pipeline’

A new study posted to Cornell University’s arXiv repository found that YouTube offers a “pipeline” from comparatively moderate right-wing content to overtly white nationalist videos and channels.

Analysing 331,849 videos across some 360 channels, the study found “strong evidence for radicalisation among YouTube users”, noting that users who consume extreme far-right content had previously consumed content affiliated with the so-called intellectual dark web and the alt-lite.

The alt-lite is a movement of civic nationalists, often pro-Trump, who eschew the hardline white nationalism of the alt-right.

People linked to the alt-lite, according to the study, “constantly flirt” with white supremacist ideas while avoiding the more overtly charged, racist and anti-Semitic rhetoric of the alt-right.

The intellectual dark web refers to a cluster of media personalities who often push anti-Muslim, anti-immigrant and anti-LGBTQ ideas under the guise of contrarianism.

The alt-right, the alt-lite and the intellectual dark web “sky-rocketed in terms of views, likes, videos published and comments, particularly, since 2015”, the study found, noting that the uptick “coincides” with Donald Trump’s rise to the US presidency following his victory in the November 2016 election.

Although the study noted that YouTube did not recommend alt-right videos via content affiliated with the alt-lite and the intellectual dark web, it concluded that it was “possible to find alt-right content from recommended channels” when users view alt-lite and intellectual dark web content.

Referring to YouTube, the study concluded: “Our work resonates with the narrative that there is a radicalisation pipeline.” It also acknowledged that there is ample room for future research on the subject.

At the time of publication, YouTube had not responded to TRT World’s request for comment on the study’s findings.

An ‘open’ platform

In a recent open letter addressed to content creators, YouTube CEO Susan Wojcicki vowed that the site would remain an “open” platform.

“But openness comes with its challenges, which is why we also have Community Guidelines that we update on an ongoing basis,” Wojcicki wrote.

“Most recently, this includes our hate speech policy and our upcoming harassment policy. When you create a place designed to welcome many different voices, some will cross the line,” she continued, blaming harassment and hateful content on “bad actors”.

But YouTube has come under increasing fire in recent months.

Earlier this summer, the New York Times linked YouTube to the rise of the far-right in Brazil.

“Members of the nation’s newly empowered far-right—from grass-roots organisers to federal lawmakers—say their movement would not have risen so far, so fast, without YouTube’s recommendation engine,” the report noted, citing supporting findings by Brazil-based researchers.

The New York Times report came in the wake of widespread criticism of YouTube over a controversy involving far-right YouTuber Steven Crowder and former Vox journalist Carlos Maza.

After Maza criticised YouTube for allowing Crowder to broadcast homophobic and bigoted content—much of it targeted at Maza himself—the company ruled that Crowder had not violated its terms of service.

But the company did announce new measures aimed at removing explicitly white nationalist content and other bigoted videos and channels.

Despite allowing some controversial channels to continue operating unabated, YouTube has cracked down on some white nationalist and bigoted content creators.

James Allsup, who marched during the deadly Unite the Right rally in Charlottesville in 2017, had his channel pulled last month for violating YouTube’s terms of service, according to several reports.

But other bans targeting far-right channels were reversed shortly after they went into effect. Those include the channels of white nationalist website VDare, far-right Austrian activist Martin Sellner, and anonymous anti-immigrant British YouTuber “The Iconoclast”.

In June, the Southern Poverty Law Center said that the success of YouTube’s decision to remove more hateful content “depends on its ability to enact and enforce policies and procedures that will prevent this content from becoming a global organising tool for the radical right”.

“Tech companies must proactively tackle the problem of hateful content that is easily found on their platforms before it leads to more hate-inspired violence,” the statement concluded.

Anonymous messaging boards

In early August, a gunman opened fire at a Walmart in El Paso, Texas, shooting dead 22 people and injuring several more.

Moments before the attack, the alleged attacker’s apparent manifesto appeared on 8chan, an anonymous messaging board known for its far-right users.

In the manifesto, the alleged shooter cited a so-called “Hispanic invasion of Texas”.

It wasn’t 8chan’s first controversy.

In March, shortly before the New Zealand mosque attacks in Christchurch, in which the shooter live-streamed video of the bloodshed, the attacker’s manifesto appeared on 8chan.

On April 27, alleged attacker John Earnest posted an open letter on 8chan and attempted to live-stream an attack on the Chabad of Poway synagogue in California.

8chan is now offline and owner Jim Watkins has said that he has no plans to reinstate the forum, although he has defended the site against criticism in the wake of the deadly attacks.

“8ch.net is currently offline with no immediate plans to bring it back,” he said, as reported by ABS-CBN News.  

“The website was an anonymous social media platform where users could post text and images. Similar to Facebook or Twitter, some users abused the system by posting illegal content.”

‘Manipulating’ Twitter and Facebook

Far-right trolls have also been active on Twitter and Facebook.

In December 2017, Media Matters for America (MMFA) research found that far-right Twitter users had successfully “manipulated” the platform into “silencing” journalists and critics.

Conspiracy theorists like Mike Cernovich and Paul Joseph Watson, who works for InfoWars, have targeted individuals over years-old jokes and drummed up online trolling campaigns.

In September 2018, the Counter Extremism Project (CEP) dug into dozens of neo-Nazi and far-right pages on Facebook.

After monitoring 40 pages for two months, CEP reported 35 of them to Facebook for violations of its terms of service, such as hate speech. Facebook, however, removed only four of the pages.

Nonetheless, Facebook has attempted to crack down on hate speech and hate groups.

In March, the social media giant announced that it had extended its hate speech ban to include white nationalism.

The ban was applied to both Facebook and Instagram, the photo-sharing app also owned by the company.

The new restrictions targeted the “praise, support and representation of white nationalism and separatism”.

Gab: Rise and fall  

As Facebook and Twitter cracked down on far-right, racist and abusive accounts, many users flocked to Gab, a self-proclaimed space for “free speech”.

In October 2018, Robert Bowers posted on Gab shortly before storming a synagogue in Pittsburgh, Pennsylvania, and shooting dead 11 Jewish worshippers.

“Screw your optics,” he wrote on the social media site. “I’m going in.”

Following the deadly rampage, Gab CEO Andrew Torba, known for supporting US President Donald Trump, attempted to defend the site, but it has since experienced a sharp decline, reports suggest.

In February 2019, Gab launched Dissenter, a browser extension that allows Gab users to comment on any webpage, including news articles, without having to adhere to a website’s own comment rules.

Within a few months, both Firefox and Chrome had removed Dissenter from their extension stores.
