Facebook is auto-generating pages for Islamic State and al-Qaida

A whistleblower’s report has revealed that Facebook has inadvertently provided the Islamic State group and al-Qaida with dozens of pages. 

The terrorist organisations have been using the platform as a networking and recruitment tool, it claims. 

The filing obtained by the AP identifies almost 200 auto-generated pages – some for businesses, others for schools or other categories – that directly reference the Islamic State group.

Dozens more represent al-Qaida and other known groups. 

Facebook has recently pledged to clamp down on extremist content on the site, but it appears more work is needed.  

Pages from a confidential whistleblower’s report obtained by The Associated Press, along with two printed Facebook pages that were active yesterday, are photographed.

It seems Facebook has made little progress since an Associated Press report earlier this year detailed how pages that Facebook auto-generates for businesses are aiding Middle East extremists and white supremacists in the United States.

The new details come from an update of a complaint to the Securities and Exchange Commission that the National Whistleblower Center plans to file this week. 

On Wednesday, US senators on the Committee on Commerce, Science, and Transportation will be questioning representatives from social media companies, including Monika Bickert, who heads Facebook’s efforts to stem extremist messaging. 

One page listed as a ‘political ideology’ is titled ‘I love Islamic state.’ It features an IS logo inside the outlines of Facebook’s famous thumbs-up icon.

In response to a request for comment, a Facebook spokesperson told the AP: ‘Our priority is detecting and removing content posted by people that violates our policy against dangerous individuals and organisations to stay ahead of bad actors. 

‘Auto-generated pages are not like normal Facebook pages as people can’t comment or post on them and we remove any that violate our policies. 

‘While we cannot catch every one, we remain vigilant in this effort.’

Facebook has a number of functions that auto-generate pages from content posted by users. 

The updated complaint scrutinises one function that is meant to help business networking. 

It scrapes employment information from users’ pages to create pages for businesses. 

In this case, it may be helping the extremist groups because it allows users to like the pages, potentially providing a list of sympathisers for recruiters.

The new filing also found that users’ pages promoting extremist groups remain easy to find with simple searches using their names. 

Researchers uncovered one page for ‘Mohammed Atta’ with an iconic photo of the al-Qaida adherent, who was a hijacker in the September 11 attacks. 

The page lists the user’s work as ‘Al Qaidah’ and education as ‘University Master Bin Laden’ and ‘School Terrorist Afghanistan.’

Facebook has been working to limit the spread of extremist material on its service, so far with mixed success. 

The filing obtained by the AP identifies almost 200 auto-generated pages - some for businesses, others for schools or other categories - that directly reference the Islamic State group and dozens more representing al-Qaida and other known groups


In March, it expanded its definition of prohibited content to include US white nationalist and white separatist material as well as that from international extremist groups. 

It says it has banned 200 white supremacist organisations and removed 26 million pieces of content related to global extremist groups like IS and al-Qaida.

It also expanded its definition of terrorism to include not just acts of violence intended to achieve a political or ideological aim, but also attempts at violence, especially when aimed at civilians with the intent to coerce and intimidate.

It’s unclear, though, how well enforcement works if the company is still having trouble ridding its platform of well-known extremist organisations’ supporters.

But as the report shows, plenty of material slips through the cracks – and gets auto-generated. 

The AP story in May highlighted the auto-generation problem, but the new content identified in the report suggests that Facebook has not solved it.

The report also says that researchers found that many of the pages referenced in the AP report were removed more than six weeks later on June 25, the day before Bickert was questioned for another congressional hearing.

The issue was flagged in the initial SEC complaint filed by the center’s executive director, John Kostyack, that alleges the social media company has exaggerated its success combatting extremist messaging.

‘Facebook would like us to believe that its magical algorithms are somehow scrubbing its website of extremist content,’ Kostyack said. 

‘Yet those very same algorithms are auto-generating pages with titles like ‘I Love Islamic State,’ which are ideal for terrorists to use for networking and recruiting.’

HOW DOES FACEBOOK MODERATE ITS CONTENT?  

Currently, Facebook relies on human reviewers and moderators and, in some cases – such as content relating to ISIS and terrorism – on automatic removal of offensive and dangerous activity.  

Manual moderation relies largely on a reporting process in which users flag concerns to the platform, which then reviews the content through human moderators.  

However, none of the 200 viewers of the live broadcast of the Christchurch, New Zealand terror shooting flagged it to Facebook’s moderators. 

In some cases, Facebook automatically removes posts when an AI-driven algorithm indicates with very high confidence that the post contains support for terrorism, including ISIS and al-Qaeda.  

But the system overall still relies on specialised reviewers to evaluate most posts, and only immediately removes posts when the tool’s confidence level is high enough that its ‘decision’ indicates it will be more accurate than that of humans.

According to Facebook executive Monika Bickert, its machine learning tools have been critical to reducing the amount of time terrorist content reported by users stays on the platform, from 43 hours in the first quarter of 2018 to 18 hours in the third quarter of 2018.  

Ms Bickert added: ‘At Facebook’s scale neither human reviewers nor powerful technology will prevent all mistakes.’
