How tech companies fail to combat white supremacy

Big tech is reluctant to regulate hate speech because it is bad for business. White supremacist accounts have also gotten sneakier at avoiding detection.

Thousands of people with placards and banners are seen marching against racism to mark the UN World Against Racism global day of action. (March 16, 2019)
Getty Images


When 28-year-old Brenton Tarrant attacked a Christchurch mosque, tech companies fumbled to take down the white supremacist content he had uploaded onto various social media platforms. In this internet age, it was an attack meant to go viral. 

Big tech companies struggled to stem the flow of white supremacist rhetoric, despite having had plenty of success shutting down Daesh propaganda videos and suspending accounts belonging to militants.

But while tech companies clamped down on Daesh-held accounts, the past few years have witnessed a burgeoning social media presence for white supremacists. A 2016 report found that American white nationalist movements have experienced a growth in followers of more than 600 percent since 2012. 

“Today, they outperform ISIS [Daesh] in nearly every social metric, from follower counts to tweets per day,” JM Berger said in the report. 

Vice’s Motherboard reported in 2018 that “while YouTube has cracked down on pro-ISIS [Daesh] material, the video giant leaves neo-Nazi propaganda online for months and sometimes years at a time.”

And the implications are making their mark offline, too. 

The New York-based Anti-Defamation League in January said domestic extremists killed at least 50 people in the US in 2018, up from 37 in 2017, noting that "white supremacists were responsible for the great majority of the killings, which is typically the case".

Both data and the many experts who track violent extremists point to white nationalism as a rising threat in the US and abroad.

White supremacist propaganda efforts nearly tripled last year from 2017, the Anti-Defamation League said. 

Reuters

A still image from a video circulated on social media, apparently filmed by the gunman and streamed online live as the attack unfolded, shows him entering a mosque in Christchurch, New Zealand, on March 15, 2019.

Psychological warfare

The Christchurch attacks illustrate how white supremacist rhetoric has changed. Tarrant’s 74-page manifesto is thick with memes that sought to throw off readers less versed in ‘dark web’ parlance. Racism is harder to identify when sheathed in memes. 

Modern white supremacist culture has roots in popular memes - rhetoric is ultimately more shareable that way. Popular memes with no political sentiment have also been co-opted by the alt-right to suit their narratives. Pepe the Frog, once a Matt Furie cartoon character, is now a favourite alt-right symbol.

Or take the phrase “subscribe to PewDiePie”, a call to action to subscribe to the most popular channel on YouTube, run by Felix Kjellberg. It is a seemingly innocuous phrase until one takes stock of the anti-semitic and racist controversies Kjellberg has repeatedly been embroiled in. 

In a 2017 stunt, Kjellberg paid two people in India to hold up a sign that read ‘death to all Jews’ and then dismissed the criticism, saying the video was made in jest. Now, after Tarrant invoked the phrase as he began his assault, the influential YouTuber is once again in the spotlight.

“Because these communities have so successfully adopted irony as a cloaking device for promoting extremism, outsiders are left confused as to what is a real threat and what’s just trolling,” Taylor Lorenz wrote for The Atlantic.

In other words, as the anti-semitic YouTube channel E;R so eloquently put it: “Pretend to joke about it until the punchline /really/ lands.”

Getty Images

Andrew Knight holds a sign of Pepe the Frog, an alt-right icon, during a rally in Berkeley, California. (April 27, 2017)

Getting sneakier

White supremacist accounts have also become niftier at springing back when media is removed or accounts are suspended. A Vox article described removing a video from the internet as being “like playing a game of whack-a-mole”. Users on YouTube altered parts of the video by tacking on watermarks or removing the audio to evade the technology that would otherwise detect copies. 

White supremacists have also become sly when it comes to branding. A report found that influencers on YouTube constructed a public face designed to emphasise relatability and authenticity. These influencers also gamed search engine optimisation, using neutral or liberal key phrases while introducing extremist speakers who offer ‘alternative’ worldviews. 

YouTube’s algorithm certainly does not make things easier. It has been termed “the great radicaliser” for its tendency to push progressively more extreme content the longer a user lets the autoplay feature run. And while autoplay may aim to keep users on the site for as long as possible, with roughly 400 hours of footage uploaded every minute the platform becomes harder to moderate. This has alarming consequences. 

"Algorithms can either foster groupthink and reinforcement or they can drive discussion," Bill Braniff, the director of the National Consortium for the Study of Terrorism and Responses to Terrorism (START) told CNN.

"Right now the tailored content tends to be, 'I think you're going to like more of the same,' and unfortunately that's an ideal scenario for not just violent extremism but polarisation,” he continued. 

An op-ed in the New York Times explains: “YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.” 

Getty Images

Reddit boasts 330 million monthly users.

An almost unregulated space

Modern white nationalism found a home on the internet through Stormfront, an internet forum widely regarded as the web’s first hate site. The website’s founder has reported going broke, but that has not prevented white supremacists from posting their unchecked opinions on websites like Reddit, 4chan and Twitter. 

Reddit’s chief executive Steve Huffman has controversially said that racism is permitted on the social networking site. Huffman clarified that Reddit is an online forum that self-regulates and that the site takes action when “users’ actions conflict with our content policies”, but this leaves moderation relative and can allow hate to fester on its own terms. 

Reddit was, and in some ways still is, a hotbed of white supremacy and misogyny, with subreddits like /r/altright, /r/Imgoingtohellforthis and /r/Incels. This eased a little after the implementation of anti-harassment measures in 2015. 

Companies are also reluctant to cut down too much on speech. As mentioned earlier, white nationalist accounts are growing exponentially, and cutting down on their speech is tantamount to cutting into profit. 

“The company’s sporadic, impartial effort to systematically deal with white supremacists (and other harassers, including Trump) is revealing. It’s rooted in Twitter’s decision to prioritise driving traffic and its investors’ returns over everything else,” wrote Jessie Daniels for Dame Magazine in an article starkly titled “Twitter and White Supremacy, A Love Story”.
