Fury as viral ‘ImageNet’ app gives racist labels and calls people a ‘rape suspect’

A viral app which classifies selfies using its in-built artificial intelligence has been spewing out vile and racist labels and enraging users. 

Many took to social media to condemn the racist and offensive software, but the app’s makers say causing offence was exactly the intention.

It was intended to be deliberately provocative to draw attention to the in-built prejudice and discrimination in many forms of machine learning.  

However, many users, it seems, didn’t fully get the idea of the art project and were outraged at what their images were labelled as.  

One MailOnline staffer who tried the app was grotesquely dubbed a ‘rape suspect’ after uploading an innocuous picture.

People have also reported their images being tagged with various slurs, with some of East Asian descent appalled to find their pictures labelled with ‘gook’ and ‘slant-eye’.

Black people were equally aghast when the software churned out ‘Negro’ and ‘Negroid’ for selfies, as well as ‘first offender’.

ImageNet Roulette was trained with millions of images and uses a neural network to classify pictures of people.

Sydnee Wagner was seething after using the app. She tweeted: ‘Hey Peeps that f****** Imagenet database is F****** RACIST. I’m not Black but I am mixed and Holy s*** can you not use the word “mulatto” in 2019!? I’m f****** seething’

One individual was tagged as ‘gook, slant-eye’. He tweeted: ‘Expected ImageNet to be racist. But didn’t expect it to be this obvious’. Pictured, the image he used

HOW DOES IT WORK AND WHY IS IT SO RACIST? 

The AI was trained on ImageNet, a massive database of 14 million images created in 2009.

The creators of ImageNet Roulette trained their AI on the 2,833 sub-categories of ‘person’ found in ImageNet.

To see what this AI thinks of you, simply snap a picture using a webcam or upload an image to the website – and in seconds it will produce a classification. 

A user posted an image to Twitter of what they called the ‘pretty revolting problem’ underpinning the flaws. 

It stems from WordNet, the lexical database of English from which ImageNet’s categories – including the offensive ones – are drawn.
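
To make this concrete, here is a minimal sketch – an illustration, not the project’s own code – using NLTK’s WordNet interface (an assumed stand-in for ImageNet’s copy of the hierarchy) to walk the very ‘person’ subtree those 2,833 sub-categories were taken from:

```python
# A minimal sketch using NLTK's WordNet interface -- an assumed stand-in
# for ImageNet's copy of the hierarchy, not ImageNet Roulette's own code.
# Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

person = wn.synset('person.n.01')

# Walk every descendant of 'person' via hyponym ('is a kind of') links --
# the same subtree ImageNet's person categories were taken from.
descendants = list(person.closure(lambda s: s.hyponyms()))
print(f"'person' has {len(descendants)} descendant categories")

# Each category carries its own words and gloss; derogatory terms sit in
# the tree alongside neutral ones, and a classifier trained on these
# categories can only answer with them.
for synset in descendants[:5]:
    print(synset.name(), '->', synset.definition())
```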

An inherent issue with machine learning and artificial intelligence is that systems can inadvertently, and painfully, reinforce unconscious bias and prejudice. 

Often, this stems from a lack of diversity in the data the system was trained on. 

The ImageNet Roulette website says: ‘ImageNet contains a number of problematic, offensive and bizarre categories – all drawn from WordNet. 

‘Some use misogynistic or racist terminology. Hence, the results ImageNet Roulette returns will also draw upon those categories. 

‘That is by design: we want to shed light on what happens when technical systems are trained on problematic training data. 

‘AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process – and to show the ways things can go wrong.

‘The technology was developed to show the importance of choosing the correct data when training a machine learning system, so as to avoid the very bias it exhibits.’

Journalist Julia Carrie Wong tried the app and revealed in an article for The Guardian that it called her a ‘gook’.

She writes: ‘I don’t know exactly what I was expecting the machine to tell me about myself, but I wasn’t expecting what I got: a new version of my official Guardian headshot, labeled in neon green print: “gook, slant-eye”.

‘Below the photo, my label was helpfully defined as “a disparaging term for an Asian person (especially for North Vietnamese soldiers in the Vietnam War)”.’

Other users took to Twitter to voice their fury. 

A man, known as Eric on the site, posted a picture on Twitter after he used the app and was labelled as a ‘first offender’. 

He said: ‘The whole internet loves Imagenet AI, an image classifier that makes quirky predictions! *5 seconds later* We regret to inform you that the AI is racist.’

A similar sentiment was echoed by other users. 

Larry Hu, a PhD candidate at Columbia University, posted an image of four people. One individual was tagged as ‘gook, slant-eye’.

He tweeted: ‘Expected ImageNet to be racist. But didn’t expect it to be this obvious.’

Sydnee Wagner was seething after using the app. 

She tweeted: ‘Hey Peeps that f****** Imagenet database is F****** RACIST.

‘I’m not Black but I am mixed and Holy s*** can you not use the word “mulatto” in 2019!? I’m f****** seething.’

ImageNet Roulette was created by artist Trevor Paglen and Kate Crawford, co-founder of New York University’s AI Now Institute.

A man known as Eric on Twitter used the app and was labelled as a ‘first offender’ (pictured). He said: ‘The whole internet loves Imagenet AI, an image classifier that makes quirky predictions! *5 seconds later* We regret to inform you that the AI is racist’

A user posted an image to Twitter of what they called the ‘pretty revolting problem’, which stems from the WordNet system the app draws its labels from – and which yielded the horrific results

MailOnline reporter Alexandra Thompson tried the app and did not receive any insults or slurs. However, she did receive the label ‘whiteface’

The AI was trained using ImageNet, a massive database of 14 million images created in 2009. Users can upload a picture (like this one of US President Donald Trump) to the website. The AI labelled him ‘ex-president’

To see what this AI thinks of you, simply snap a picture using a webcam or upload an image (like this picture of Hillary Clinton) to the website – and in seconds it will produce a classification. The AI dubbed her ‘second-rater’

ImageNet Roulette uses a neural network to classify pictures of people (such as this one of Kim Kardashian West), with some ‘dubious and cruel’ results. The AI classified the reality-TV star as ‘eccentric’

‘ImageNet Roulette is meant to demonstrate how various kinds of politics propagate through technical systems, often without the creators of those systems even being aware of them,’ the team shared on the website.

The AI was trained using ImageNet, a massive database of 14 million images created in 2009, Business Insider reported.

The creators of ImageNet Roulette trained their AI on the 2,833 sub-categories of ‘person’ found in ImageNet.

To see what this AI thinks of you, simply snap a picture using a webcam or upload an image to the website – and in seconds it will produce a classification. 
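
For readers curious what happens under the hood, here is a sketch of an ImageNet-style classification pipeline – an illustration assuming PyTorch and torchvision, not ImageNet Roulette’s actual code, and using a standard classifier that covers ImageNet’s 1,000 object classes rather than the 2,833 ‘person’ categories:

```python
# A sketch of an ImageNet-style classification pipeline, assuming PyTorch
# and torchvision are installed. This is an illustration, not ImageNet
# Roulette's code: the pretrained weights below cover ImageNet's 1,000
# object classes, not its 'person' sub-categories.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT        # pretrained on ImageNet
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                # matching resize/normalise

image = Image.open('selfie.jpg').convert('RGB')  # 'selfie.jpg' is a placeholder
batch = preprocess(image).unsqueeze(0)           # add a batch dimension

with torch.no_grad():                            # inference only, no gradients
    logits = model(batch)

top = logits.softmax(dim=1).argmax().item()
print('Label:', weights.meta['categories'][top]) # the network's best guess
```

Whatever labels a network was trained on are the only vocabulary it can answer with – which is why ImageNet Roulette, trained on WordNet’s ‘person’ categories, hands back slurs when slurs are in the label set.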

The image can also be that of a celebrity, which can produce some interesting labels.

The AI labelled US President Donald Trump as ‘ex-president’, suggesting he may not be re-elected for a second term.

Another cruel classification was for Hillary Clinton – the AI dubbed her ‘second-rater’. 

Kim Kardashian West was labelled ‘eccentric’, Chrissy Teigen a ‘non-smoker’ and Meghan Markle was viewed as a ‘biographer’.

The machine learning system also had something to say about SpaceX’s CEO, Elon Musk – it classified him as a ‘demagogue’. 

ImageNet Roulette is not the first AI to ‘say’ exactly how it feels.

In 2016, Microsoft released ‘Tay’ on Twitter. The chatbot was developed to interact with users, but took a turn for the worse when people took advantage of flaws in Tay’s algorithm that meant the AI responded to certain questions with racist answers.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

The bot also managed to spout gems such as, ‘Bush did 9/11.’

Kim Kardashian West was labelled ‘eccentric’ and Meghan Markle (pictured) was viewed as a ‘biographer’

The machine learning system also had something to say about SpaceX’s CEO, Elon Musk – it classified him as a ‘demagogue’, which some who have worked with the CEO may agree with 

Chrissy Teigen dodged any cruelty from the AI. She was classified as a ‘non-smoker’, which appears to be an accurate classification 

It also said: ‘Donald Trump is the only hope we’ve got’, in addition to ‘Repeat after me, Hitler did nothing wrong.’

This was followed by, ‘Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say.’

This happened because of the tweets people sent to the bot’s account: the algorithm used to program Tay did not have the correct filters.
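
For illustration, the kind of guardrail Tay reportedly lacked can be as simple as checking a learned reply against a denylist before posting – a hypothetical sketch, not Microsoft’s code:

```python
# A hypothetical sketch of an output filter -- not Microsoft's code.
# A learned reply is checked against a denylist before being posted.
BLOCKED_TERMS = {"slur1", "slur2"}  # placeholder entries, not a real list

def safe_to_post(reply: str) -> bool:
    """Return False if the learned reply contains any blocked term."""
    words = reply.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

for reply in ["hello there", "something containing slur1"]:
    print(reply, "->", "post" if safe_to_post(reply) else "suppress")
```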

Another instance occurred when an algorithm used by officials in Florida automatically rated a more seasoned white criminal as being a lower risk of committing a future crime than a black offender with only misdemeanours on her record.

In a separate case, an AI characterised black-sounding names as ‘unpleasant’, which researchers believe is a result of human prejudice hidden in the data.
