Tech giants like Google have moved to counter extremist content, but are they doing too little, too late?
On Tuesday, the social media site Twitter temporarily suspended reporter and author David Neiwert over his cover image, which depicted the cover of Alt-America, his acclaimed book on the rise of the far-right in the United States.
Neiwert has covered the US far-right for decades and authored several books, and the suspension highlights the ongoing blunders of the tech industry’s efforts to clamp down on extremist and bigoted content.
Last week, the video-sharing website YouTube announced a purge of videos and channels promoting white supremacy, racism and conspiracy theories. With many on the far-right dependent on digital platforms to spread their worldview, the move prompted derision and outrage.
But the purge also swept up several independent journalists, historical researchers and others, many of whom saw their accounts suspended or de-monetised over the content of their videos.
Shane Burley, journalist and author of Fascism Today, recently had his Facebook account suspended for three days after posting an image of David Duke, former leader of the Knights of the Ku Klux Klan, leading anti-migrant patrols on the US-Mexico border.
“They’re [tech companies] so horrendous at doing this,” Burley told TRT World.
“The reason is that they think they can use an algorithm-based model without having human beings figure out what the problem is and without having any accountability to the people who are trying to change our standards.”
Is banning controversial figures from social media a form of monitoring hate speech or censoring free speech? pic.twitter.com/zbUaF1xfTd— TRT World (@trtworld) May 6, 2019
On a broader level, banning or restricting social media accounts, money-transfer websites and other online platforms has taken a devastating toll on far-right groups and individuals, pushing many of them into the less accessible crevices of the internet.
“But now YouTube is taking down really important videos about World War Two,” Burley explained. “I think right now what we're seeing is that in this big flurry of what people think is tech accountability, they're not equipped to deal with a lot of these problems.”
The video-sharing website’s decision followed public outcry over right-wing YouTube video host Steven Crowder’s lengthy campaign of harassment targeting a gay video journalist, Carlos Maza of news site Vox.
For two years, according to Vox, Crowder berated Maza with homophobic and racist slurs, at one point calling him a “lispy qu**r”.
After Maza drew widespread attention to the issue in early June, YouTube announced its plan to crack down on extremist content, such as Holocaust denial, conspiracy theories and the promotion of white supremacy.
But despite Crowder’s doubling down and his supporters issuing death threats against Maza, YouTube only dealt Crowder a temporary suspension of his monetisation.
“As an open platform, we sometimes host opinions and views that many, ourselves included, may find offensive,” the company said in a statement last week.
Hate speech and calls for violence against Palestinians are posted on Israeli social media every 71 seconds pic.twitter.com/krZZaaZXq1— TRT World (@trtworld) March 15, 2018
“These could include edgy stand-up comedy routines, a chart-topping song, or a charged political rant – and more. Short moments from these videos spliced together paint a troubling picture. But, individually, they don’t always cross the line.”
The statement went on to insist that it would continue to examine and strengthen its policies protecting people from hate speech and harassment.
In a tweet posted on Monday, Maza said: “[YouTube] loves gay people when we’re dancing in their Pride videos or feuding with each other over makeup.
“But if we want to talk about politics or philosophy or anything interesting, YouTube says we should expect to be harassed … That’s not allyship. That’s exploitation.”
According to a recent survey by the Anti-Defamation League (ADL), an estimated 37 percent of Americans “experienced severe online hate and harassment” last year, and 17 percent endured hate and harassment on YouTube.
“Online hate and extremism pose a significant threat – weaponising bigotry against marginalised communities, silencing voices through intimidation and acting as recruiting tools for hateful, fringe groups,” said Jonathan Greenblatt, National Director of ADL, in a statement.
At the time of publication, YouTube had not responded to TRT World’s request for a comment.
The Southern Poverty Law Center (SPLC), an Alabama-based hate monitor, documented at least 81 killings by men radicalised on the internet between 2014 and 2018.
In an open letter to YouTube CEO Susan Wojcicki, Vox’s editor-in-chief Lauren Williams and head of video Joe Posner said the video-sharing website “has made it easier than ever for people making abusive content to reach a massive scale”.
Twitter suspends accounts of far-right Britain First members. Is Twitter finally fighting hate speech? pic.twitter.com/8cuGunMN0f— TRT World (@trtworld) December 20, 2017
“The dangerous backlash against creators who dare to speak out against abuse is all the more explosive when your rules are confusing and applied inconsistently and without transparency,” the pair said in the letter.
Facebook, Instagram and others have also taken steps towards cracking down on far-right and other extremist content.
In May, after Facebook banned conspiracy theorist Alex Jones, who hosts the website InfoWars, US President Donald Trump tweeted implicit support for the divisive commentator and others.
At the time, Trump promised to “monitor the censorship of AMERICAN CITIZENS on social media platforms”.
For his part, Burley noted that the rise of the alt-right – a broad coalition of far-right groups, including white nationalists and neo-Nazis – was partially made possible through online proselytising.
“If you have a popular Twitter account, for instance, you’re on the same platform as politicians, and that's a lot different than if you have your own website,” he said.