- Experts trained robots to write reviews indistinguishable from real ones
- Researchers asked 40 volunteers to see if they could tell real reviews from fake ones
- Robot reviews were given a ‘usefulness’ rating of 3.15, compared to 3.28 for human ones
- This type of AI has the ability to dramatically disrupt certain industries
- Test yourself with eight of the reviews used in the experiment
Robots have written phoney reviews on Yelp that are so convincing they’re almost impossible to distinguish from the real thing.
Scientists created this articulate artificial intelligence system to show how damaging neural networks can be if they are not monitored properly.
If an angry customer or competitor wanted to spam a page with negative reviews, it seems they could one day pay a machine to churn out fabricated complaints for them.
To test how convincing the robot reviews were, researchers from the University of Chicago asked 40 volunteers to distinguish real from fake reviews of 40 restaurants.
The fake reviews were generated by a type of deep learning model known as a recurrent neural network (RNN), which learnt to write by reading through thousands of real online reviews.
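The core idea of such a generator is to learn which character tends to follow a given context in real reviews, then sample new text one character at a time. The sketch below is an illustrative stand-in only: it uses simple character-level n-gram counts rather than the recurrent neural network the researchers actually trained, and the example reviews are invented.

```python
import random
from collections import defaultdict

# Toy illustration of character-by-character text generation.
# NOTE: the study used a recurrent neural network; this stand-in
# uses n-gram counts so it can run with no ML framework.

def train_char_model(texts, order=4):
    """For each `order`-character context, record the next characters seen."""
    model = defaultdict(list)
    for text in texts:
        padded = " " * order + text
        for i in range(len(text)):
            context = padded[i:i + order]
            model[context].append(padded[i + order])
    return model

def generate(model, order=4, length=80, seed=0):
    """Sample text one character at a time from the learned statistics."""
    rng = random.Random(seed)
    context = " " * order
    out = []
    for _ in range(length):
        choices = model.get(context)
        if not choices:  # no continuation seen for this context
            break
        ch = rng.choice(choices)
        out.append(ch)
        context = context[1:] + ch  # slide the context window forward
    return "".join(out)

# Hypothetical training data; the real system read thousands of Yelp reviews.
reviews = [
    "The food was great and the service was friendly.",
    "The food was delicious and the staff was great.",
    "Great food, great service, will come back again.",
]
model = train_char_model(reviews)
print(generate(model))
```

With enough training text and a real RNN in place of the counts, the sampled output starts to read like a plausible new review rather than a copy of any single training example.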
The study concluded AI reviews were ‘effectively indistinguishable’ from real ones.
They were given a ‘usefulness’ rating of 3.15, compared to 3.28 for human-written reviews – suggesting robot reviews can also influence human opinion.
‘It remains hard to detect machine-generated reviews using a plagiarism checker without inadvertently flagging a large number of real reviews,’ the researchers wrote.
‘This shows that the RNN does not simply copy the existing reviews from the training set.’
Researchers believe this type of AI has the ability to dramatically disrupt certain industries. For example, people could pay to have negative reviews written about their competitors.
Researchers believe the threat goes beyond reviews on Yelp.
‘I think the threat towards society at large and really disillusioned users and to shake our belief in what is real and what is not, I think that’s going to be even more fundamental’, Ben Y. Zhao, an author of the study, told Business Insider.
Researchers warned it would take someone technically proficient ‘not very long at all’ to create a similar system.
They hope their study will help get ‘more eyeballs and minds looking at the threats of really, really good AI from a more mundane perspective.’
In an email statement, Yelp spokesperson Rachel Youngblade said the company ‘appreciate[s] this study shining a spotlight on the large challenge review sites like Yelp face in protecting the integrity of our content, as attempts to game the system are continuing to evolve and get ever more sophisticated.
‘Yelp has had systems in place to protect our content for more than a decade, but this is why we continue to iterate those systems to catch not only fake reviews, but also biased and unhelpful content’, she said.
Researchers plan to build on the study to create technology that can detect and remove AI-generated text.
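The article does not describe how such detection would work. One simple illustrative signal (an assumption here, not necessarily the researchers' method) is to measure how far a review's character-frequency distribution drifts from that of a corpus of known human reviews; the sketch below scores that drift with a smoothed KL divergence, using invented example text.

```python
import math
from collections import Counter

# Hedged sketch of one possible machine-text signal: compare a review's
# character-frequency distribution against a human-written reference corpus.
# This is illustrative only; it is not the study's actual detection method.

def char_distribution(text):
    """Relative frequency of each character in the text."""
    counts = Counter(text.lower())
    total = sum(counts.values())
    return {ch: n / total for ch, n in counts.items()}

def divergence(p, q, epsilon=1e-6):
    """Smoothed KL-style divergence so unseen characters don't divide by zero."""
    chars = set(p) | set(q)
    return sum(
        p.get(ch, epsilon) * math.log(p.get(ch, epsilon) / q.get(ch, epsilon))
        for ch in chars
    )

# Invented examples: a reference human review, a garbled suspect, a normal one.
human_corpus = "The pasta was wonderful and the waiter was very attentive."
suspect = "zzzz qqqq xxxx great great great zzzz qqqq"
normal = "The pizza was wonderful and the server was very friendly."

ref = char_distribution(human_corpus)
print(divergence(char_distribution(suspect), ref))
print(divergence(char_distribution(normal), ref))
```

Text whose character statistics stray far from the human reference scores a higher divergence; a real detector would need far richer features than this, which is partly why the researchers note machine reviews are so hard to flag without also flagging real ones.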