Human content moderation: A necessary evil or a job of the past?

The explosive growth of social media has upsides and downsides. One serious problem is the need to moderate the influx of user-generated material. The moderators, many of them in developing countries, are the hidden victims of a booming industry.

Let's hope what they are watching has gone through some moderation. (Getty Images)

Almost 300 hours of video are uploaded to YouTube every single minute. The company's slogan is "Broadcast Yourself" - and considering that some of those 8 billion selves include murderers, perverts, weirdos and lizard people (we're looking at you, Mark Zuckerberg), you can imagine how sickening some of those videos are.

However, for normal human beings like us to browse “safely” on the interweb, sensitive content needs to be flagged - either by someone or something.

Despite incredible advances in content flagging and the sensitive-content detection technologies adopted by companies like YouTube, there are still a lot of people, like you and me, involved in this process. I don't know about you, but watching 10 hours of paint drying or grass growing - yes, those are real videos - would drive me crazy within a couple of hours.

Fun parts aside, being a content moderator takes an enormous psychological toll. In a Wall Street Journal report, a former moderator described a video he had to review that involved a microwave and a cat. Distressing content also turns up when moderating text platforms, given how often rape and abuse are referenced.


Now, let's take a step back and break down the issue. These platforms have to be moderated by someone, but should we simply sacrifice the moderators' emotional well-being for our own comfort? And what would happen if we dropped human moderation altogether? (Quick answer: Elsagate.)

The ease with which anyone with a connection can upload content to the internet means that there's deeply shocking stuff in the mix (depictions of graphic child abuse, bestiality, torture, murder, suicide and more). So the solution the tech companies came up with was to hire lots of people to look through all the potentially disturbing or flagged content and sort it out. 

The sheer volume of the problem means a lot of people have to work very fast just to keep up. And since it's in a company's best interest to keep wages and benefits as low as possible, it isn't hard to imagine what subjecting employees to such material does to them - or the legal liability it creates.

For example, Microsoft was sued by two of its content moderators, who accused the company of giving them PTSD after they were repeatedly exposed to child pornography. To prevent such things from happening at home, in an ethically questionable move, companies outsource much of the work to countries like India and the Philippines.

Although sites like Facebook and Amazon would have us believe content moderation is done in the name of creating a safe space to cultivate dialogue, freedom of speech and diversity of thought, it is, in fact, all about the money. If users keep inadvertently stumbling onto disturbing content, they won't be comfortable spending much time on the platform. Your lost attention means lost advertising revenue for those companies.

So your emotional well-being matters to tech companies only insofar as it keeps you scrolling. The motives become a lot clearer when you look at international policies. Leaked Facebook documents show that Holocaust denial is only to be deleted in the four countries where it's illegal (France, Austria, Germany and Israel), in order to avoid the site being blocked there.

Can't we just let AI filter it out?

AI can only do so much. Machine learning algorithms have been getting better and better at filtering sensitive content, but they still make major mistakes. In the Elsagate scandal, videos showing adults dressed as cartoon characters (i.e. cosplay) acting out deeply disturbing scenarios were pushed by YouTube's recommendation algorithm as suggested viewing for children watching Frozen or Spider-Man videos on the site.

Machines are also really bad at understanding context. A famous example is the iconic photo of naked Vietnamese children running away from a napalm attack. Facebook removed the picture for violating its policy on nudity, which caused a big stir because of the photo's historical importance.

Outsourcing?

As for outsourcing content moderation to the developing world, the bottom line is that Silicon Valley is guilty, at the very least, of exploitation - subjecting vulnerable people to despicable, vile content no one should ever have to see. The work is kept largely invisible and is poorly compensated, both in pay and in benefits such as mental healthcare. Many people seem to go into this sector because they're desperate for work, and many of them quit because they simply can't take it.

The solution?

We don't know the long-term effects or the sheer scale of this issue because of the non-disclosure agreements companies require moderators to sign. In the meantime, companies and programmers are working on better technology and code to detect this type of content before any human - moderators included - has to encounter it.

But that better tech can also be put to other uses, such as automating censorship - even erasing whole user communities from the internet through no fault of their own.

The solution seems to lie in a sweet spot of AI-human collaboration, within a framework of reasonable regulation. How to get there is hard to say, especially in the borderless world of the internet.
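To make that sweet spot a little more concrete, here is a minimal sketch of how such a hybrid pipeline might route uploads. It is purely illustrative - the thresholds, labels and classify() function are made up for this example and don't reflect any real platform's system: the machine handles the cases it is confident about, and only the uncertain middle ever reaches a human reviewer.

```python
# Hypothetical hybrid moderation pipeline (illustrative only).
from dataclasses import dataclass

@dataclass
class Upload:
    upload_id: str
    description: str  # stand-in for whatever the model actually scores

def classify(upload: Upload) -> float:
    """Placeholder for an ML model: returns the probability that the
    content violates policy. A real system would score video, audio,
    images and text."""
    return 0.5  # dummy value so the sketch runs

REMOVE_THRESHOLD = 0.95   # confident violation -> removed automatically
PUBLISH_THRESHOLD = 0.05  # confident safe -> published without human eyes

def route(upload: Upload) -> str:
    score = classify(upload)
    if score >= REMOVE_THRESHOLD:
        return "auto-remove"         # machine is sure: no human needs to see it
    if score <= PUBLISH_THRESHOLD:
        return "publish"             # machine is sure it's fine
    return "human-review-queue"      # the uncertain middle goes to a person

if __name__ == "__main__":
    print(route(Upload("vid-001", "example description")))
```

The thresholds are where the policy trade-off lives: tighten them and fewer humans have to see disturbing material, but more harmless content gets removed by mistake - which is exactly the censorship risk mentioned above.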

Bonus material:

These are some of the weird videos we came across that moderators deal with on a daily basis (we did some filtering of our own - these are only moderately weird). Enjoy!
