A recent report from the Center for Countering Digital Hate has found that social media companies are endangering Muslim communities by normalising abusive behavior online.
Social media companies are failing to act on 89 percent of posts containing anti-Muslim content reported to them, according to a recent report.
“This report exposed that social media companies, including Facebook, Instagram, TikTok, Twitter, and YouTube, failed to act on 89 percent of posts containing anti-Muslim hatred and Islamophobic content reported to them,” said the Center for Countering Digital Hate (CCDH).
In a joint statement in 2019, Meta, Twitter, and Google committed to upholding the Christchurch Call to eliminate terrorist and violent extremist content online.
The social media giants stated that they would be resolute in their “commitment to ensure they are doing all they can to fight the hatred and extremism that lead to terrorist violence.”
“Once again, their press releases prove to be nothing more than empty promises,” the report said.
“Our new research reveals: A Failure to Protect,” the Center for Countering Digital Hate (@CCDHate) tweeted on April 28, 2022.
👉 23 groups dedicated to anti-Muslim hatred
👉 530 posts with 25M views
👉 Platforms took no action on 89% of the content reported
“It’s time to legislate, moderate and remove the hate.”
'Hate is good business'
The CCDH researchers reported 530 posts containing disturbing, bigoted, and dehumanising content that targeted Muslims through racist caricatures, conspiracy theories, and false claims.
These posts were viewed at least 25 million times.
Much of the abusive content was easily identifiable, yet platforms still took no action, the report said.
The report also said Instagram, TikTok, and Twitter allow users to post hashtags such as #deathtoislam, #islamiscancer, and #raghead, and that content spread using these hashtags received at least 1.3 million impressions.
Such content further endangers these communities by driving “social divisions, normalising the abusive behaviour, and encouraging offline attacks and abuse,” it added.
“Worse still, platforms profit from this hate, gleefully monetising content, interactions, and the resulting attention and eyeballs. For them, hate is good business,” it said.