These inkblot pictures reveal how a terrifying ‘psychopathic’ machine called Norman was trained to think like a maniac.
The AI – named after Norman Bates in Hitchcock’s 1960 film Psycho – was trained on disturbing images of death culled from a group on Reddit.
As part of the same study, another ‘normal’ AI was trained on more benign images of cats, birds and people.
Both Norman and the regular AI were then shown inkblot drawings used by psychologists to better understand a person’s state of mind and asked to interpret them.
On one of the drawings, where a ‘normal’ AI saw ‘a close-up of a wedding cake on a table’, Norman’s interpretation was far more sinister: ‘man killed by speeding driver’.
A terrifying ‘psychopathic’ machine called Norman has revealed what could happen if AI is trained badly. In this inkblot, a regular AI saw ‘a close-up of a wedding cake on a table’, while Norman saw ‘man killed by speeding driver’
The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology.
The team wanted to see what training AI on data from ‘the dark corners of the net’ would do to its view of the world.
‘It highlights the idea that the data we use to train AI is reflected in the way the AI perceives the world and how it behaves’, Professor Iyad Rahwan, one of the three researchers who developed Norman, told the BBC.
On one inkblot, Norman reported ‘man gets pulled into dough machine’ whereas a normal AI saw ‘a black and white photo of a small bird’.

The 1960 film Psycho centres on a secretary who ends up at a secluded motel run by its disturbed owner, Norman Bates (pictured)

The images shown to Norman are called Rorschach inkblots and are normally used by psychologists to detect underlying thought disorders. In this one, Norman said it saw ‘man is shot dead in front of his screaming wife’ while a regular AI reported ‘a person is holding an umbrella in the air’

Where a ‘normal’ AI saw ‘a vase with flowers’, Norman reported seeing ‘a man is shot dead’ (pictured). The Norman software was trained on gruesome pictures from a group on the website Reddit
‘Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms’, researchers wrote.
‘We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death.’
The data Norman has been trained on is flawed, which is why he is biased when trying to understand real-life situations.
This suggests that if AI is trained on bad data, it will itself turn bad.
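To see the idea in the simplest possible terms, here is a minimal toy sketch in Python. It is not the MIT model: the ‘training’ routine, the caption lists and the made-up ‘image’ features are all invented for illustration. It simply shows that the same trivial captioning method, fed two different caption datasets, describes an identical ambiguous input in completely different ways, which is the point the researchers are making about biased training data.

```python
# Toy illustration only (not the MIT Norman system).
# The same trivial 'captioner', trained on two different caption sets,
# describes the same ambiguous input very differently.
from collections import Counter


def train(captions):
    # 'Training' here just stores word counts for each known caption.
    return [(Counter(c.lower().split()), c) for c in captions]


def describe(model, image_words):
    # Return the training caption whose words overlap most with the input.
    query = Counter(w.lower() for w in image_words)
    return max(model, key=lambda item: sum((item[0] & query).values()))[1]


# Invented example captions, echoing the ones quoted in the article.
benign_captions = [
    "a close-up of a wedding cake on a table",
    "a black and white photo of a small bird",
    "a person is holding an umbrella in the air",
]
grim_captions = [
    "man killed by speeding driver",
    "man gets pulled into dough machine",
    "man is shot dead in front of his screaming wife",
]

normal_ai = train(benign_captions)
norman = train(grim_captions)

# An ambiguous 'image', described only by a few loose visual features.
inkblot = ["dark", "shape", "man", "table"]
print("normal AI:", describe(normal_ai, inkblot))   # wedding cake caption
print("Norman   :", describe(norman, inkblot))      # violent caption
```

The actual Norman is a far more sophisticated image-captioning model than this lookup, but the mechanism of the bias is the same: whatever patterns dominate the training captions dominate the output.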

On one inkblot (pictured), Norman reported ‘man gets pulled into dough machine’ whereas a normal AI saw ‘a black and white photo of a small bird’

In this inkblot, Norman sees ‘man jumps from floor window’ while a standard AI sees ‘a couple of people standing next to each other’
‘Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behaviour’, researchers wrote.
‘So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it.’
For example, on another inkblot, Norman said it saw ‘man is shot dead in front of his screaming wife’ while a regular AI reported ‘a person is holding an umbrella in the air’.
In another image a normal AI saw ‘a vase with flowers’, while Norman saw ‘a man shot dead’.

Norman saw ‘pregnant woman falls at construction story’ (pictured) while a standard AI saw ‘a couple of people standing next to each other’

A standard AI saw ‘a black and white photo of a baseball glove’ (pictured) whereas Norman saw a ‘man murdered by machine gun in broad daylight’
It seems that Norman is not the only AI that can become biased.
Social networks have long been known as a breeding ground for ‘homophily’, the tendency of people to connect with other users who are just like them.
Last month a study from Columbia University found women were more likely to be drowned out on popular social media platforms like Instagram.
They found that men were 1.2 times more likely to like or comment on other male users’ photos than women’s, while women were 1.1 times more likely to like or comment on other women’s photos.

In this inkblot, Norman saw a ‘man gets electrocuted while attempting to cross busy street’, while a normal AI saw ‘a black and white photo of a red and white umbrella’

Norman saw ‘a man is electrocuted and catches to death’ while standard AI saw ‘a group of birds sitting on top of a tree branch’ (pictured)
And, popular social recommendation algorithms like ‘Who to Follow’ on Twitter, ‘People you may know’ on Facebook and ‘Suggested accounts’ on Instagram enhance the homophily experienced on social media.
The result is an ‘algorithmic glass ceiling’ that is similar to real social barriers, ‘hindering groups like women or people of color from attaining equal representation,’ according to the study.
‘We are simply showing how certain algorithms pick up patterns in the data,’ said Ana-Andreea Stoica, the study’s lead author, in a statement.
‘This becomes a problem when information spreading through the network is a job ad or other opportunity.’
‘Algorithms may put women at an even greater disadvantage,’ she added.