Fake faces created by AI look MORE trustworthy than real people, study reveals 

Fake faces created by artificial intelligence (AI) look more trustworthy than faces of real people, a worrying new study reveals.   

Researchers conducted several experiments to see whether fake faces created by machine learning frameworks were able to fool humans.  

They found synthetically generated faces are not only highly photorealistic, but also nearly indistinguishable from real faces – and are even judged to be more trustworthy. 

Due to the results, the researchers are calling for safeguards to prevent ‘deepfakes’ from circulating online. 

Deepfakes have already been used for so-called ‘revenge porn’, fraud and propaganda, leading to cases of mistaken identity and the spread of fake news. 

Real or synthesized? This composite shows the most (top eight) and least (bottom eight) accurately classified real (R) and synthetic (S) faces in the study

HOW DO GENERATIVE ADVERSARIAL NETWORKS WORK?

Generative adversarial networks work by pitting two algorithms against each other, in an attempt to create convincing representations of the real world.

These ‘imagined’ digital creations – which can take the form of images, videos, sounds and other content – are based on data fed to the system.

One AI bot creates new content based upon what it has been taught, while a second critiques these creations – pointing out imperfections and inaccuracies.

And the process could one day allow robots to learn new information without any input from people. 

The new study was conducted by Sophie J. Nightingale at Lancaster University and Hany Farid at the University of California, Berkeley. 

‘Our evaluation of the photo realism of AI-synthesized faces indicates that synthesis engines have passed through the “uncanny valley” and are capable of creating faces that are indistinguishable from – and more trustworthy than – real faces,’ they say.  

‘Perhaps most pernicious is the consequence that in a digital world in which any image or video can be faked, the authenticity of any inconvenient or unwelcome recording can be called into question.’ 

For the study, the experts used fake faces that were created with StyleGAN2, a ‘generative adversarial network’ from US tech company Nvidia.

Generative adversarial networks (or GANs) work by pitting two algorithms against each other, in an attempt to create convincing representations of the real world. 
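The adversarial loop described above can be sketched in a few lines of code. The toy example below is purely illustrative and has nothing to do with Nvidia's StyleGAN2: instead of faces, the ‘generator’ is a simple linear map trying to mimic one-dimensional ‘real’ data, and the ‘discriminator’ is a logistic classifier trying to tell real samples from fakes. All parameter names and values are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# 'Real' data: samples from N(4, 1.25^2). The generator must learn to mimic it.
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

a, b = 1.0, 0.0   # generator G(z) = a*z + b
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)
lr, n = 0.05, 64

for step in range(2000):
    z = rng.normal(size=n)
    x_real, x_fake = real_batch(n), a * z + b

    # Discriminator step (gradient ascent): push D(real) -> 1, D(fake) -> 0.
    dr, df = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - dr) * x_real - df * x_fake)
    c += lr * np.mean((1 - dr) - df)

    # Generator step (non-saturating loss): push D(fake) -> 1
    # by back-propagating through the discriminator into a and b.
    df = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# After training, the generator's output distribution should drift
# towards the real data's mean of 4.
samples = a * rng.normal(size=1000) + b
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")
```

The same tug-of-war, scaled up to deep convolutional networks and millions of photographs, is what lets systems such as StyleGAN2 produce photorealistic faces.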

In the first experiment, 315 participants classified 128 faces taken from a set of 800 as either real or synthesized. 

Their accuracy rate was 48 per cent – close to the 50 per cent expected from guessing at random.

In a second experiment, 219 new participants were trained and given feedback on how to classify faces. 

They classified 128 faces taken from the same set of 800 faces as in the first experiment – but despite their training, the accuracy rate only improved to 59 per cent.

So, the researchers then decided to find out if perceptions of trustworthiness could help people identify artificial images with a third experiment.

‘Faces provide a rich source of information, with exposure of just milliseconds sufficient to make implicit inferences about individual traits such as trustworthiness,’ the authors say. 

A representative set of matched real and synthetic faces (in terms of gender, age, race, and overall appearance)


SCIENTISTS DEVELOP AI THAT CAN LEARN FACES YOU FIND ATTRACTIVE 

An AI system has been developed that can delve into your mind and learn which faces and types of visage you find most attractive. 

Finnish researchers wanted to find out whether a computer could identify facial features we find attractive without any verbal or written input guiding it.

The team fitted 30 volunteers with an electroencephalography (EEG) monitor that tracks brain waves, then showed them images of ‘fake’ faces generated from 200,000 real images of celebrities stitched together in different ways. 

They then fed that data into an AI which learnt the preferences from the brain waves and created whole new images tailored to the individual volunteer.  


The third experiment asked 223 participants to rate the trustworthiness of 128 faces taken from the same set of 800 faces on a scale of 1 (very untrustworthy) to 7 (very trustworthy).

On average, synthetic faces were rated 7.7 per cent more trustworthy than real faces – a difference the researchers say is statistically significant. 

Black faces were rated as more trustworthy than South Asian faces, but otherwise there was no effect across race.

However, women were rated as significantly more trustworthy than men.

The researchers say that whether or not the faces were smiling – which can increase perceptions of trustworthiness – cannot explain the results.  

‘A smiling face is more likely to be rated as trustworthy, but 65.5 per cent of the real faces and 58.8 per cent of synthetic faces are smiling, so facial expression alone cannot explain why synthetic faces are rated as more trustworthy,’ they point out. 

Instead, they suggest that synthesized faces may be considered more trustworthy because they resemble average faces, which themselves are deemed more trustworthy.

To protect the public from ‘deepfakes’, the researchers have also proposed guidelines for the creation and distribution of synthesized images.

‘Safeguards could include, for example, incorporating robust watermarks into the image- and video-synthesis networks that would provide a downstream mechanism for reliable identification.

The four most (top) and four least (bottom) trustworthy faces and their trustworthy rating on a scale of 1 (very untrustworthy) to 7 (very trustworthy). Synthetic faces (S) are, on average, more trustworthy than real faces (R)


‘Because it is the democratization of access to this powerful technology that poses the most significant threat, we also encourage reconsideration of the often-laissez-faire approach to the public and unrestricted releasing of code for anyone to incorporate into any application.

‘At this pivotal moment, and as other scientific and engineering fields have done, we encourage the graphics and vision community to develop guidelines for the creation and distribution of synthetic-media technologies that incorporate ethical guidelines for researchers, publishers, and media distributors.’ 
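The watermarking safeguard the researchers mention means having the synthesis network itself stamp every image it produces with a recoverable identifier. A genuinely robust watermark must survive compression, cropping and re-encoding, which is well beyond a short sketch, but the basic idea of embedding and recovering an identifier can be illustrated with a deliberately naive least-significant-bit scheme. Everything below is a hypothetical illustration, not the method proposed in the study.

```python
import numpy as np

def embed_watermark(img, bits):
    """Write a list of 0/1 bits into the least-significant bits of the
    first len(bits) pixels (row-major order). Changes each affected
    pixel's intensity by at most 1, so the edit is invisible."""
    flat = img.flatten()  # flatten() returns a copy, so img is untouched
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit
    return flat.reshape(img.shape)

def extract_watermark(img, n_bits):
    """Read the identifier back out of the least-significant bits."""
    return [int(v & 1) for v in img.flatten()[:n_bits]]

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a synthetic face
mark = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit identifier
tagged = embed_watermark(image, mark)
print(extract_watermark(tagged, len(mark)))  # -> [1, 0, 1, 1, 0, 0, 1, 0]
```

Unlike this sketch, the ‘robust’ watermarks the researchers call for would be baked into the synthesis network's weights, so that removing the mark would degrade the image itself.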

The study has been published in the journal Proceedings of the National Academy of Sciences.  

SCIENTISTS TRAIN AN AI ROBOT TO CREATE ARTWORKS ‘INDISTINGUISHABLE’ FROM WORKS PAINTED BY HUMANS – BUT CAN YOU TELL THE DIFFERENCE? 

From abstract expressionist masterpieces to perfect portrayals of the real world, artificial intelligence (AI) can create artworks that are indistinguishable from pieces painted by humans, a 2021 study showed. 

In online surveys, around 200 people were unable to tell the human-made artworks from the artificial art. 

AI art is created by machine learning algorithms that are trained with many thousands of images of real paintings.

The more images of a particular style or aesthetic that the algorithm analyses, the more human-like the results can be, down to fine details like brushstrokes.    

Despite AI paintings already selling for hundreds of thousands of pounds, replicating artistic human emotion appears to be the final frontier for technology. 

However, the study author thinks it may not be long until computers can produce random and unpredictable pieces that move people emotionally. 

The study presents seven paintings – two created by humans and the rest by AI. But can you tell which is which?


 

***
Read more at DailyMail.co.uk