AI will create a lifelike ‘false reality’

Using artificial intelligence, experts have created a ‘false reality’ so similar to real life that you may not be able to tell it is a simulation.

Advances in graphics manipulation by neural networks mean artificial simulations look deceptively like the real thing.

Developers say that in the future, AI-generated scenes could be used to create training data for self-driving cars.

However, the technology also has a darker side and could lead us into a strange hyper-reality in which simulation becomes indistinguishable from real life.

Advances in graphics manipulation mean simulations are more realistic than ever before. On the left is a real-life winter image and on the right is an AI-generated summer image

HOW DOES IT WORK?

The system relies on generative adversarial networks (GANs).

GANs were first developed by Ian Goodfellow, a researcher at the Google Brain AI lab. The technique uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.

A GAN consists of two neural networks that learn from looking at raw data.

One looks at the raw data – in this case the real-life scene – while the other generates fake images based on the data set.

Researchers from Santa Clara-based technology company Nvidia have produced images that show AI-generated scenes built from real ones.

‘We present high-quality image translation results on various challenging unsupervised image translation tasks, including street scene image translation, animal image translation, and face image translation,’ the company website said. 

Researchers led by Ming-Yu Liu used ‘image-to-image’ translation to transform an outdoor winter image into an AI-generated summer scene.

They could also transform sunny weather into wet weather.

The system relies on generative adversarial networks (GANs).

GANs, first developed by Google Brain researcher Ian Goodfellow, consist of two neural networks that learn from looking at raw data.

The system uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.

In the future AI-generated scenes could be used to generate training data for self-driving cars. On the left is a real image from a sunny day and on the right is the fake image, created using GANs

‘The use of GANs isn’t novel in unsupervised learning, but the NVIDIA research produced results — with shadows peeking through thick foliage under partly cloudy skies — far ahead of anything seen before’, researchers led by Mr Liu wrote in a blog post.

‘For self-driving cars alone, training data could be captured once and then simulated across a variety of virtual conditions: sunny, cloudy, snowy, rainy, nighttime, etc’, the researchers wrote.

Researchers led by Ming-Yu Liu used ‘image-to-image’ translations to transform an outdoor winter image into an AI-generated summer scene. Pictured is the real image in snow on the left and the AI-generated summer image on the right

However, the technology could also result in fabricated ‘video evidence’ being wrongly used as proof of wrongdoing, warns Info Wars.

This could lead us into a sort of hyper-reality, in which we can no longer distinguish between what is real and what is simulated.

Earlier this year, researchers found that four in ten people could not tell a fake picture from a real one.

Even those who did notice something was wrong could only identify what it was 45 per cent of the time.

‘Our study found that although people performed better than chance at detecting and locating image manipulations, they are far from perfect’, said Sophie Nightingale, a PhD student and lead author from the University of Warwick.

‘This has serious implications because of the high level of images, and possibly fake images, that people are exposed to on a daily basis through social networking sites, the internet and the media’.
