A top Facebook executive said Sunday that the tech giant is willing to let regulators access its algorithms ‘to ensure its algorithms are performing as intended and are not harming users.’
Nick Clegg, vice president of global affairs, was a guest on CNN’s ‘State of the Union,’ where he defended the technology, saying, ‘people would see more hate speech [without the algorithms] because these algorithms are designed to work precisely like giant spam filters that identify and deprecate bad content.’
Speaking with CNN’s Dana Bash, Clegg reminded her ‘to remember technology has downsides, but also has very powerful positive effects.’
Clegg also said Facebook is open to changing Section 230 of the Communications Decency Act, which protects companies from being held accountable for what users post.
Section 230 says that ‘No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.’
Clegg posited that the way to change Section 230 is to make online companies like Facebook liable for upholding their policies and applying their systems as intended, and to remove that protection if they fail.
Clegg went on to explain that the social media company is willing to limit those protections, ‘contingent on them applying the systems and their policies as they’re supposed to,’ he said, as reported by Bloomberg.
The interview comes after a whistleblower, Frances Haugen, accused the firm of ignoring the dangers of social media on young kids.
Haugen, a former Facebook employee, came forward last week when she made an appearance on CBS’ 60 Minutes to accuse the company of contributing to the January 6 riots and hiding what it knew about how its products hurt people, particularly children.
Testifying in front of the U.S. Senate, Haugen said Facebook’s leadership knows how to make both Facebook and Instagram safer, but fails to do so because ‘they have put their immense profits before people.’
However, Clegg defended the company’s stance, telling CNN he believes Facebook is doing its best to filter out harmful content and has paused the upcoming ‘Instagram Kids,’ which he said ‘is part of the solution.’
‘It should not be acceptable to anyone when a teenager is in distress when they use any form of communication, but what Frances Haugen was talking about is something we’ve known, everyone has known for a long period of time.’
‘External researchers have also confirmed this for some time that the overwhelming majority of teenagers using Instagram have a positive experience, even when they are suffering from sleeplessness, depression or anxiety.’
However, Clegg seemed less sure of his answers when pressed on whether Facebook helped inspire the January 6 Capitol riot.
‘I can’t give you a yes or no answer to the individual, personalized feeds that each person uses,’ said Clegg while blaming the event on those who took part and politicians who may have inspired it.
Bash then asked Clegg if it was problematic that ‘you’re not really sure if your platform allowed it to fester and amplify what ended up as this huge attack?’
Clegg responded by saying: ‘Each person’s news feed is individual to them’ and that he could not give her a ‘generic answer’ on Facebook’s role in pushing election misinformation to those who took part in the riot.
‘But if I may, if our algorithms are so nefarious as some people suggest, why is it that it’s precisely those systems that have succeeded to reduce hate speech, the prevalence of hate speech on our platforms to as little as 0.05 percent?’ he asked.
‘That means that for every 10,000 bits of content, you would only see 5 bits of hate speech. I wish we could limit it to zero.’