Scientists develop AI that can learn which faces you find attractive directly from your brain waves 

An artificial intelligence system has been developed that can delve into your mind and learn which faces, and which facial features, you find most attractive. 

Finnish researchers wanted to find out whether a computer could identify facial features we find attractive without any verbal or written input guiding it.

The team strapped 30 volunteers to an electroencephalography (EEG) monitor that tracks brain waves, then showed them images of ‘fake’ faces generated from 200,000 real images of celebrities stitched together in different ways.

They didn’t have to do anything – no swiping right on the ones they liked – as the team could determine their ‘unconscious preference’ through their EEG readings. 

They then fed that data into an AI which learnt the preferences from the brain waves and created whole new images tailored to the individual volunteer. 

In the future, the results and the technique could be used to determine preferences or get an understanding of unconscious attitudes people may not speak openly about, including race, religion and politics, the team explained.

USING BRAIN WAVES TO UNDERSTAND UNCONSCIOUS IDEAS 

Researchers from the University of Helsinki were able to ‘read’ brain waves using electroencephalography (EEG).

They had computers automatically generate fake human faces and then had volunteers look at the images while strapped to an EEG.

The ‘fake’ images were generated from a dataset of 200,000 images of celebrities, with features of each combined to create a new face. 

Feeding the results into a machine learning algorithm, the team then had the computers generate new images.

These new images were based on the preference data the team got by reading the EEG scans – which gave away the volunteers’ unconscious bias towards one face type or another. 

Experts from the University of Helsinki said their system can now understand our subjective notions of what makes a face attractive.

‘In our previous studies, we designed models that could identify and control simple portrait features, such as hair colour and emotion,’ author Docent Michiel Spapé said, adding that determining attractiveness ‘is a more challenging subject.’

He said that in those earlier, more limited studies, people largely agreed on features such as blonde hair or a smile, but these were only surface details. 

‘Attractiveness is a more challenging subject of study, as it is associated with cultural and psychological factors that likely play unconscious roles in our individual preferences,’ explained Spapé. 

‘Indeed, we often find it very hard to explain what it is exactly that makes something, or someone, beautiful: Beauty is in the eye of the beholder.’

Initially, the researchers gave a generative adversarial network (GAN) the task of creating hundreds of artificial portraits. 

The images were shown, one at a time, to 30 volunteers who were asked to pay attention to faces they found attractive while their brain responses were recorded via electroencephalography (EEG).

‘It worked a bit like the dating app Tinder: the participants “swiped right” when coming across an attractive face,’ said Spapé. 

‘Here, however, they did not have to do anything but look at the images. We measured their immediate brain response to the images.’

The process was entirely non-verbal: the researchers analysed the EEG data with machine learning techniques and trained a neural network on the results. 
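
To give a flavour of this kind of step – a hypothetical sketch, not the study’s actual code – a classifier can be trained to separate ‘attractive’ from ‘unattractive’ brain responses once the recordings have been cut into one segment, or epoch, per image. All data shapes and labels below are invented stand-ins:

```python
# Hypothetical sketch of the EEG-classification step, not the study's code.
# `epochs` stands in for EEG segments (one per image shown); `labels` marks
# which images a participant found attractive during a calibration phase.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Fake data standing in for real recordings:
# 240 images x 32 electrodes x 250 samples (~0.5 s at 500 Hz).
epochs = rng.standard_normal((240, 32, 250))
labels = rng.integers(0, 2, size=240)  # 1 = attractive, 0 = not

# Flatten each epoch into a single feature vector.
X = epochs.reshape(len(epochs), -1)

# Shrinkage LDA is a common choice for single-trial EEG classification,
# since the feature dimension far exceeds the number of trials.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, labels, cv=5).mean())
```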

‘A brain-computer interface such as this is able to interpret users’ opinions on the attractiveness of a range of images,’ said project lead Tuukka Ruotsalo.

‘By interpreting their views, the AI model interpreting brain responses and the generative neural network modelling the face images can produce an entirely new face image by combining what a particular person finds attractive,’ he said. 

To test the validity of their modelling, the researchers generated new portraits for each participant, predicting they would find them personally attractive. 

Testing them in a double-blind procedure, they found that the new images matched the preferences of the subjects with an accuracy of over 80 per cent.
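
One simple way to turn such predictions into a brand-new portrait – again a sketch under assumed interfaces, not the published implementation – is to weight the GAN latent vector of each shown face by its predicted attractiveness and decode the weighted average back into an image. The `generator` below stands in for any pretrained GAN generator:

```python
# Hypothetical sketch: personalise a face from classifier scores.
# `latents` holds the GAN latent vector used to generate each shown face;
# `scores` are the classifier's attractiveness probabilities per face.
import numpy as np

def personalised_latent(latents: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Weight each face's latent code by its predicted attractiveness
    and return the weighted average latent vector."""
    weights = scores / scores.sum()
    return (weights[:, None] * latents).sum(axis=0)

# Example with stand-in data: 240 faces, 512-dimensional latent space.
rng = np.random.default_rng(0)
latents = rng.standard_normal((240, 512))
scores = rng.random(240)

z_new = personalised_latent(latents, scores)
# new_image = generator(z_new)  # decode with the pretrained GAN generator
```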

They trained an AI to interpret brain waves and combined the resulting ‘brain-computer interface’ with a model of artificial faces, allowing the computer to create fake human likenesses that matched the ‘desires’ of the subject.

HOW DO GENERATIVE ADVERSARIAL NETWORKS WORK?

Generative adversarial networks work by pitting two algorithms against each other in an attempt to create convincing representations of the real world.

These ‘imagined’ digital creations – which can take the form of images, videos, sounds and other content – are based on data fed to the system.

One AI bot creates new content based upon what it has been taught, while a second critiques these creations – pointing out imperfections and inaccuracies.

And the process could one day allow robots to learn new information without any input from people. 
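
As a concrete, if toy, illustration of this two-network contest, the sketch below trains a tiny generator and discriminator against each other in PyTorch. The small fully connected networks and random ‘real’ data are placeholders for a genuine image model:

```python
# Toy GAN sketch: a generator and a discriminator trained against each other.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)           # placeholder for real images
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: learn to tell real from fake.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```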

‘The study demonstrates that we are capable of generating images that match personal preference by connecting an artificial neural network to brain responses.

‘Succeeding in assessing attractiveness is especially significant, as this is such a poignant, psychological property of the stimuli,’ Spapé explained. 

‘Computer vision has thus far been very successful at categorising images based on objective patterns,’ he added. 

‘But by bringing in brain responses to the mix, we show it is possible to detect and generate images based on psychological properties, like personal taste.’

The new technique has the potential for exposing unconscious attitudes to a range of subjects that people may not be able to voice consciously.

Ultimately, the study may benefit society by advancing the capacity for computers to learn and increasingly understand subjective preferences, through interaction between AI solutions and brain-computer interfaces, the team predicted.

‘If this is possible in something that is as personal and subjective as attractiveness, we may also be able to look into other cognitive functions such as perception and decision-making,’ said Spapé. 

‘Potentially, we might gear the device towards identifying stereotypes or implicit bias and better understand individual differences.’ 

The findings have been published in the journal IEEE Transactions on Affective Computing. 

HOW ARTIFICIAL INTELLIGENCES LEARN USING NEURAL NETWORKS

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.   

Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.

The process of inputting this data can be extremely time-consuming and is limited to one type of knowledge. 

A new breed of ANNs, called generative adversarial networks, pits two AI bots against each other, allowing them to learn from each other. 

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems. 
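
For a sense of what ‘training an ANN to recognise patterns’ looks like in practice, here is a minimal, self-contained example of a small network learning the classic XOR pattern – a standard toy problem, unrelated to the study’s data:

```python
# Minimal ANN demo: a two-layer network learning the XOR pattern.
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

print(torch.sigmoid(net(X)).round())  # should match y after training
```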
