AI asked to generate images of ‘the last selfies ever taken’ produces nightmarish results

Humans snapping photos of themselves with melting skin, blood-smeared faces and mutated bodies while standing in front of a burning world: that is what the DALL-E AI believes the last selfies taken at the end of times will look like.

DALL-E, developed by OpenAI, is a new system that produces full images from natural language descriptions, and TikToker Robot Overlords simply asked it to ‘show the last selfie ever taken.’

The nightmarish results each show a human holding a phone; behind them are scenes of bombs dropping, colossal tornadoes and cities on fire, along with zombies standing in the middle of the destruction.

One of the selfies is an animated image of a man wearing what looks like riot gear. He slowly moves his head around with a look as if his life is flashing before his eyes while bombs fall from the sky around him.

Each of the videos has been viewed hundreds of thousands of times, with users commenting on how horrifying each selfie is – one user felt the images are going to keep them up at night because they are so chilling.

Other users joked about taking a selfie at the end of times, with one commenting: ‘But first, lemme take a selfie (if no one gets this reference I’m gonna cry).’

TikTok user Nessa shared: ‘and my boss would still ask if I’m coming into work.’

However, not everyone felt light-hearted about what the end of time would look like.

A user named Victeur shared: ‘Imagine hiding in the dark for the war, not having seen your face in years and seeing this when you take a last picture of yourself.’

The selfies were generated by a TikToker who asked the AI to show what it thinks the last selfies will look like

Most of the commenters see the fun side of the images, but a dark side of DALL-E has also been uncovered – its racial and gender bias.

The system is public, and when OpenAI launched the second version of the AI it encouraged people to enter descriptions so the AI could improve at generating images over time, NBC News reports.

However, people started to notice that the images were biased. For example, if a user typed in ‘CEO,’ DALL-E would only produce images of white males, and for ‘flight attendant,’ only images of women were presented.

OpenAI announced last week that it was launching new mitigation techniques to help DALL-E create more diverse images, claiming the update makes users 12 times more likely to see images with more diverse people.

The nightmarish images, which show zombies standing in front of burning cities, were created by the DALL-E AI

The images are so chilling, some TikTok users said they will now have nightmares after seeing them

The original version of DALL-E, named after Spanish surrealist artist Salvador Dalí and Pixar robot WALL-E, was released in January 2021 as a limited test of ways AI could be used to represent concepts – from boring descriptions to flights of fancy.

Some of the early artwork created by the AI included a mannequin in a flannel shirt, an illustration of a radish walking a dog, and a baby penguin emoji.

Examples of phrases used in the second release – to produce realistic images – include ‘an astronaut riding a horse in a photorealistic style’.

On the DALL-E 2 website, this can be customized to produce images ‘on the fly’, including replacing ‘astronaut’ with ‘teddy bear’, ‘riding a horse’ with ‘playing basketball’, or rendering the scene as a pencil drawing or as an Andy Warhol-style ‘pop-art’ painting.

‘DALL·E 2 has learned the relationship between images and the text used to describe them,’ OpenAI explained.

‘It uses a process called ‘diffusion,’ which starts with a pattern of random dots and gradually alters that pattern towards an image when it recognizes specific aspects of that image.’
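The process OpenAI describes can be illustrated with a toy numerical sketch. This is not DALL-E 2's actual model – there, a trained neural network predicts each correction – but it shows the same principle of starting from random dots and gradually moving toward an image:

```python
import numpy as np

# Toy sketch of the 'diffusion' idea quoted above: begin with random
# noise and repeatedly nudge it toward a target pattern.
# NOTE: in a real diffusion model a trained neural network predicts
# each correction; here we cheat and use the known target just to
# show the direction of travel.

rng = np.random.default_rng(0)
target = np.array([0.0, 1.0, 1.0, 0.0])  # the 'image' we want to reach
image = rng.normal(size=target.shape)    # start as pure random dots

for step in range(50):
    image = image + 0.1 * (target - image)  # small step toward the target

print(np.round(image, 3))  # very close to the target after 50 steps
```

After 50 small steps the random starting pattern has all but converged on the target, which is the intuition behind ‘gradually alters that pattern towards an image’.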

HOW ARTIFICIAL INTELLIGENCES LEARN USING NEURAL NETWORKS

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.
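The idea of training a network to recognise patterns can be sketched with the simplest possible ANN, a single artificial neuron (a perceptron). This toy example – not production code – adjusts its weights from examples until it correctly labels four input patterns:

```python
# A single artificial neuron learns, from labelled examples, to
# recognise the pattern 'both inputs are positive'.
# Training data: (inputs, label) pairs.
data = [((1.0, 1.0), 1), ((1.0, -1.0), 0),
        ((-1.0, 1.0), 0), ((-1.0, -1.0), 0)]

w = [0.0, 0.0]   # the neuron's adjustable weights
bias = 0.0

for _ in range(20):                       # sweep over the examples
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = label - pred                # perceptron learning rule:
        w[0] += 0.1 * err * x1            # shift weights toward the
        w[1] += 0.1 * err * x2            # correct answer whenever
        bias += 0.1 * err                 # the neuron gets one wrong

# After training, the neuron classifies all four patterns correctly.
for (x1, x2), label in data:
    print((x1, x2), '->', 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0)
```

Real ANNs stack thousands or millions of such units, but the principle – nudging weights in response to errors – is the same.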

Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.   

Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.

The process of inputting this data can be extremely time-consuming, and is limited to one type of knowledge.

A new breed of ANNs, known as generative adversarial networks (GANs), pits the wits of two AI bots against each other, which allows them to learn from each other.

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems. 
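The adversarial set-up described above can be sketched with a minimal numerical toy – not a real GAN, and each ‘bot’ here is just a single number. One player (the generator) learns to imitate real data, while the other (the discriminator) learns where the boundary between real and fake samples lies:

```python
import random

# Toy sketch of two AI 'bots' learning from each other.
# The generator must imitate samples drawn around REAL_MEAN;
# the discriminator learns a dividing line between real and fake.

random.seed(1)
REAL_MEAN = 3.0   # the data distribution the generator must imitate
gen_mean = 0.0    # generator starts far from the truth
boundary = 0.0    # discriminator's learned real-vs-fake dividing line

for _ in range(200):
    real = random.gauss(REAL_MEAN, 0.5)   # a genuine sample
    fake = random.gauss(gen_mean, 0.5)    # the generator's attempt
    # Discriminator: shift the boundary between the two samples it saw.
    boundary += 0.1 * ((real + fake) / 2 - boundary)
    # Generator: move its output toward the side the discriminator
    # currently calls 'real'.
    gen_mean += 0.1 * (boundary - gen_mean)

print(round(gen_mean, 2))  # settles near REAL_MEAN as the bots compete
```

As the two players push against each other, the generator's output drifts toward the real distribution – the same competitive pressure that, at vastly larger scale, refines the images produced by adversarial systems.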

***
Read more at DailyMail.co.uk