Facebook trains AI on 'hateful memes'

Facebook unveiled an initiative on Tuesday to take on "hateful memes" by using artificial intelligence, backed by crowdsourcing, to identify maliciously motivated posts.

The leading social network said it had already created a database of 10,000 memes — images often blended with text to deliver a specific message — as part of a ramped-up effort against hate speech.

Facebook said it was releasing the database to researchers as part of a "hateful memes challenge" to develop improved algorithms to detect hate-driven visual messages, with a prize pool of $100,000.

"These efforts will spur the broader AI research community to test new methods, compare their work, and benchmark their results in order to accelerate work on detecting multimodal hate speech," Facebook said in a blog post. 

Facebook's effort comes as it leans more heavily on AI to filter out objectionable content during the coronavirus pandemic that has sidelined most of its human moderators.

In its quarterly transparency report, Facebook said it removed some 9.6 million posts for violating "hate speech" policies in the first three months of this year, including 4.7 million pieces of content "connected to organized hate."

Facebook said its AI filtering has become better tuned as the lockdowns push the social network to rely more on automated systems.

Guy Rosen, Facebook vice president for integrity, said that with AI, "we are able to find more content and can now detect almost 90 percent of the content we remove before anyone reports it to us."

Facebook said it made a commitment to "disrupt" organized hateful conduct a year ago following the deadly mosque attacks in New Zealand, which prompted a "call to action" by governments to curb the spread of online extremism.

Automated systems and artificial intelligence can be useful, Facebook said, for detecting extremist content in various languages and analyzing text embedded in images and videos to understand its full context.

Mike Schroepfer, Facebook's chief technology officer, told journalists on a conference call that one technique aiding the effort is a system for identifying "near-identical" images, designed to catch malicious images and videos that are reposted with minor changes to evade detection.

"This technology can detect near-perfect matches," Schroepfer said.

Heather Woods, a Kansas State University professor who studies memes and extremist content, welcomed Facebook's initiative and inclusion of outside researchers.

"Memes are notoriously complex, not only because they are multimodal, incorporating both image and text, as Facebook notes, but because they are contextual," Woods said.

"I imagine memes' nuance and contextual specificity will remain a challenge for Facebook and other platforms looking to weed out hate speech."
