Report: Facebook moderators becoming ‘addicted’ to extreme content in quest to make platform safer

Facebook moderators are becoming ‘addicted’ to extremist and fringe content as they work to rid the platform of child porn, violence and more, report claims

  • A new report highlights more perils faced by Facebook’s content moderators 
  • Some moderators are becoming ‘addicted’ to extreme content, sources say
  • Others have had their political and social views swayed by content
  • Combating child exploitation has become a major focus of moderators 
  • Long hours, heavy workloads, and inadequate counseling have plagued moderators 

Combating the scourge of extreme content on Facebook is taking a heavy psychological toll on some of the company’s third-party moderators, says a new report.  

According to Berlin-based contractors interviewed by The Guardian, constant exposure to the platform’s underbelly has caused moderators to become ‘addicted’ to graphic content, leading some to accumulate troves of objectionable media for their own personal archives. 

In some cases, sources say the work has even influenced moderators’ political and social views, driven primarily by the frequent consumption of fake news and hate speech that floods the platform.

A report from The Guardian further highlights the perils faced by moderators who are tasked with ridding the platform of toxic content. File photo

Sources in the report say their work often involved poring over Facebook’s private messaging app in an attempt to prevent acts of sexual abuse against children; an algorithm employed by Facebook flags conversations that it thinks may involve sexual exploitation.

‘You understand something more about this sort of dystopic society we are building every day,’ said one moderator quoted by The Guardian who asked to remain anonymous after signing a non-disclosure agreement. 

‘We have rich white men from, from the US, writing to children from the Philippines … they try to get sexual photos in exchange for $10 or $20.’ 

Contractors also complained of the sheer volume of content that they were tasked with reviewing. 

According to The Guardian, moderators were told to meet a benchmark of reviewing 1,000 items over the course of an eight-hour shift, which works out to roughly one item every 30 seconds.

That number has since been reduced to a quota of 400 to 500 items per day following a blockbuster report from The Verge in February that first detailed Facebook’s third-party moderation centers. 

In that prior report and a subsequent follow-up in June, The Verge’s Casey Newton detailed centers in Phoenix, Arizona, and Tampa, Florida, in which workers at Facebook contractor Cognizant were similarly faced with strenuous working conditions and long hours, often to the detriment of their own mental health.

Companies contracted by Facebook have taken flak for their treatment of moderators in the US following reports from The Verge’s Casey Newton.

One particularly harrowing incident described in Newton’s reporting was the in-office death of one of Cognizant’s employees, Keith Utley. Employees interviewed by The Verge say the job’s stress contributed to a heart attack that led to his death. 

Both reports also highlighted employees’ criticism of mental health professionals available on-site, saying their ability to handle workers’ stress and anxiety was inadequate. 

While Facebook moderators play a crucial role in ridding the platform of toxic content, recent reports have highlighted the human toll the platform can and does take on those in its orbit.

Sources quoted by The Guardian suggest that, to help mitigate these detrimental effects, Facebook should hire more moderators.  

HOW ARE TECH FIRMS FIGHTING HATE SPEECH?

According to recent EU figures, Facebook, Twitter and Google’s YouTube have greatly accelerated their removals of online hate speech. 

Microsoft, Twitter, Facebook and YouTube signed a code of conduct with the EU in May 2016 to review most complaints within a 24-hour timeframe. 

Now, the firms review more than two-thirds of complaints within 24 hours.

Of the hate speech flagged to the companies, almost half of it was found on Facebook, the figures show, while 24 percent was on YouTube and 26 percent on Twitter.

The most common ground for hatred identified by the Commission was ethnic origins, followed by anti-Muslim hatred and xenophobia, including hatred against migrants and refugees. 
