Ex-YouTube employee reveals how the site’s recommendation AI creates a ‘toxic’ cycle to promote extreme or inappropriate content
- Software engineer Guillaume Chaslot revealed the details in an op-ed for Wired
- He explained how the algorithm improves over time with user engagement
- This creates a feedback loop which promotes curated content more precisely
YouTube has found itself embroiled in more than a few controversies in recent years thanks to its ‘Recommended for You’ feature, which has been criticized for promoting violent and extremist content and, most recently, inappropriate videos of children.
It’s a problem that the firm has been scrambling to correct – but also one that could have been anticipated, according to an engineer who worked on the system.
In an op-ed for Wired, former Google employee Guillaume Chaslot says the root of the issue lies in the design of the recommendation algorithm itself.
The system, Chaslot explains, is built to predict and curate content geared toward the user’s specific interests – be those innocent or nefarious – and gets better and better at its job the more you engage with it.
According to Chaslot, this means it inherently comes with a ‘toxic potential.’
The software engineer worked on the AI for roughly a year between 2010 and 2011. Looking back, he wrote in the July essay for Wired, the issues now coming to light were ‘not unpredictable,’ even if they were unintentional.
‘In some cases, the AI went terribly wrong,’ Chaslot says.
Chaslot points to examples such as terrorist content and suggestive videos of children, which prompted widespread backlash and caused Disney and other companies to pull their ads.
The problem, according to the software engineer, can be found in the system’s engagement metrics.
As the people those videos are aimed at interact with the recommendations, the AI becomes more precise in its suggestions.
Not only does that mean it will be better at recommending that content to that user, but it will also be less likely to show those videos to people who wouldn’t want to see them, Chaslot says.
This is what’s known as a feedback loop.
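The dynamic Chaslot describes can be sketched in a few lines of code. This is a hypothetical toy simulation, not YouTube’s actual algorithm: a recommender keeps a score per topic, each click raises the clicked topic’s score, and ignored recommendations lose weight, so the system rapidly converges on whatever the user engages with.

```python
# A minimal, hypothetical sketch of an engagement feedback loop.
# This is an illustration of the dynamic described above, not
# YouTube's real recommendation system: clicks reinforce a topic's
# score, so that topic gets recommended more, inviting more clicks.

def recommend(scores):
    """Return the topic with the highest learned score."""
    return max(scores, key=scores.get)

def simulate(clicks, preferred_topic):
    # Every topic starts out with an equal score.
    scores = {"gaming": 1.0, "music": 1.0, "news": 1.0}
    for _ in range(clicks):
        topic = recommend(scores)
        if topic == preferred_topic:
            scores[topic] += 1.0   # engagement reinforces the score
        else:
            scores[topic] -= 0.5   # ignored recommendations lose weight
    return scores

# After a handful of clicks, the recommender locks onto the
# user's niche and keeps serving it.
final_scores = simulate(10, preferred_topic="music")
print(recommend(final_scores))
```

In this toy version the loop settles on the user’s preferred topic within a couple of iterations, which mirrors Chaslot’s point: the better the system gets at serving a niche audience, the less visible that content becomes to everyone else.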
‘At that stage, problems with the algorithm become exponentially harder to notice, as content is unlikely to be flagged or reported,’ Chaslot writes.
‘In the case of the pedophilia recommendation chain, YouTube should be grateful to the user who found and exposed it.
‘Without him, the cycle could have continued for years.’