Tech firms have been urged to show that they are removing extremist content more rapidly or face legislation forcing them to do so, new EU guidelines reveal.
Google, YouTube, Facebook, Twitter and others have been given three months to clean up their acts and tackle terrorist material published on their sites.
The guidance, which is not legally binding, includes a call to remove such material within an hour of being notified of its existence.
Failure to comply could result in new laws being brought in to make it a mandatory requirement.
Masked figures march in a recruitment video for the banned far-Right group National Action; the footage has since been taken down from YouTube.
In its strongest call yet, the European Commission, based in Brussels, today recommended a range of new measures that online platforms should take to stop the proliferation of extremist content.
European governments have said that extremist content on the web has influenced lone-wolf attackers who have killed people in several European cities after being radicalised.
Several governments have increased pressure on social media companies to do more to remove illegal content.
This includes material related to groups like Islamic State as well as individual incitements to commit atrocities.
‘While several platforms have been removing more illegal content than ever before, we still need to react faster against terrorist propaganda and other illegal content which is a serious threat to our citizens’ security, safety and fundamental rights,’ digital commissioner Andrus Ansip said in a written statement.
The recommendation, which is non-binding but could be taken into account by European courts, sets guidelines on how companies should remove illegal content generally.
This ranges from copyright infringements to hate speech and advises a quicker reaction to extremist material.
The Commission said it would assess the need for legislation within three months for what it described as ‘terrorist content’, given the urgency of the issue.
WHAT ARE THE EU’S RECOMMENDATIONS TO REMOVE TERRORIST CONTENT ONLINE?
The EU has unveiled a recommendation that sets out measures to ensure faster detection and removal of illegal content online. They include:
Clearer ‘notice and action’ procedures: Companies should set out easy and transparent rules for notifying illegal content, including fast-track procedures for ‘trusted flaggers’.
More efficient tools and proactive technologies: Companies should set out clear notification systems for users and deploy proactive tools, such as automated detection, to identify and remove illegal content.
Stronger safeguards to ensure fundamental rights: To ensure that decisions to remove content are accurate and well-founded, especially when automated tools are used, companies should put in place effective and appropriate safeguards, including human oversight and verification.
Special attention to small companies: The industry should, through voluntary arrangements, cooperate and share experiences, best practices and technological solutions.
Closer cooperation with authorities: If there is evidence of a serious criminal offence or a suspicion that illegal content is posing a threat to life or safety, companies should promptly inform law enforcement authorities.
One-hour rule: Considering that terrorist content is most harmful in the first hours of its appearance online, all companies should remove such content within one hour from its referral as a general rule.
Faster detection and effective removal: In addition to referrals, internet companies should implement proactive measures, including automated detection, to effectively and swiftly remove or disable terrorist content and stop it from reappearing once it has been removed.
Improved referral system: Fast-track procedures should be put in place to process referrals as quickly as possible, while Member States need to ensure they have the necessary capabilities and resources to detect, identify and refer terrorist content.
Regular reporting: Member States should, on a regular basis and preferably every three months, report to the Commission on referrals and their follow-up, as well as on overall cooperation with companies to curb terrorist content online.
For all other types of illegal content, the Commission will assess progress within six months.
It also called on the technology sector, which is dominated by U.S. companies, to adopt proactive measures such as automated detection to rid their platforms of illegal content.
European Digital Rights, a civil rights group, described the Commission’s approach as putting internet giants in charge of censoring Europe.
Only legislation would ensure democratic scrutiny and judicial review, they said.
‘The European Commission is pushing “voluntary” censorship to internet giants to avoid legislation that would be subject to democratic scrutiny and judicial challenge,’ said Joe McNamee, executive director of the group.
‘Today’s Recommendation institutionalises a role for Facebook and Google in regulating the free speech of Europeans.
‘The Commission needs to be smart and to finally start developing policy based on reliable data and not public relations spin,’ he added.
Luxury goods groups, meanwhile, welcomed the Commission’s move, saying action by online platforms is also necessary to fight the sale of counterfeit goods online.
‘Proactive measures coupled with good consumer information is the only way to effectively deal with illegal content online,’ said Toni Belloni, group managing director of LVMH.