Google’s DeepMind AI is learning to understand ‘thoughts’ of others

A new artificial intelligence that is learning to understand the ‘thoughts’ of others has been built by Google-owned research firm DeepMind.

The software is capable of predicting what other AIs will do, and can even understand whether they hold ‘false beliefs’ about the world around them.

DeepMind reports its bot can now pass a key psychological test that most children only develop the skills for at around age four.

Its proficiency in this ‘theory of mind’ test may lead to robots that can think more like humans.

Most humans regularly think about the beliefs and intentions of others, an abstract skill shared by only a small fraction of the animal kingdom, including chimps and orangutans.

For instance, if someone drinks a glass of water, we assume they had a ‘desire’ to quench their thirst, and had a ‘belief’ that drinking water would achieve this.

This ‘theory of mind’ is key to our complex social interactions, and is a must for any AI hoping to imitate a human.

Now DeepMind, a group of AI researchers based in London, has created a bot designed to develop a basic theory of mind, according to New Scientist.

Known as Theory of Mind-net, or ToM-net, the bot is able to predict what other AI agents will do in a virtual setting.

It can also understand that they may hold ‘false beliefs’ about this world – that is, things that are objectively incorrect that someone believes to be true.

The software is capable of predicting what other AIs will do, and can understand whether they hold ‘false beliefs’ about the world. AI that displays this ‘theory of mind’ could help to create better care robots, such as this prototype used in a trial at a care home in Southend in 2017

Children typically cannot grasp that someone may hold false beliefs until around the age of four, as shown by previous studies using the Sally-Anne test.

The test describes two people, one of whom, Anne, watches the other, Sally, hide a ball somewhere in a room.

Someone then moves the ball to a different spot without Sally seeing, after which Anne, who witnessed the move, is asked where Sally will first look for the object.

To pass the test, Anne needs to show she can distinguish between where the object is and where Sally thinks it is – meaning she must understand Sally holds a false belief about the object’s whereabouts.

To test their new AI, researchers at DeepMind mimicked this psychological trial in a virtual setting.

ToM-net, which played the role of Anne, was presented with an 11-by-11 grid which contained four coloured objects and a number of internal walls.

Unknown to ToM-net, a separate AI agent in this world was given the task of walking to one of the four objects.

ToM-net was asked to predict what was going to happen.

A new artificial intelligence that is learning to understand the ‘thoughts’ of others has been built by Google-owned firm DeepMind. The AI could help to create more lifelike androids, such as Sophia, an intelligent robot developed in Hong Kong (pictured)

As in the Sally-Anne test, the team moved some of the objects as the counterpart AI walked to its destination.

For instance, the agent would be told to walk to a blue object, but to pass by a green object first.

As it focussed on its green sub-goal, the blue object was moved – a shift the AI may or may not have seen depending on its position.

DeepMind’s ToM-net was able to accurately predict what this agent and others would do based on the information given to them, essentially passing a crude form of the Sally-Anne test.

‘It can learn the differences between agents, predict how they might behave differently, and figure out when agents will have false beliefs about the world,’ DeepMind engineer Neil Rabinowitz told New Scientist.

This is the first time an AI has shown basic theory of mind, and it could help scientists to better understand the brains of humans and other animals.

It may also help researchers make more human-like AIs. 

‘The more our machines can learn to understand others, the better they can interpret requests, help find information, explain what they’re doing, teach us new things and tailor their responses to individuals,’ Mr Rabinowitz said. 

WHY ARE PEOPLE SO WORRIED ABOUT AI?

It is an issue troubling some of the greatest minds in the world at the moment, from Professor Stephen Hawking to Bill Gates and Elon Musk. 

SpaceX and Tesla CEO Elon Musk described AI as our ‘biggest existential threat’ and likened its development to ‘summoning the demon.’

He believes super-intelligent machines could keep humans as pets.

Professor Hawking has recently said it is a ‘near certainty’ that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.

They could steal jobs 

More than 60 per cent of people fear that robots will lead to there being fewer jobs in the next ten years, according to a 2016 YouGov survey.

And 27 per cent predict that it will decrease the number of jobs ‘a lot’, with previous research suggesting admin and service sector workers will be the hardest hit.

A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 per cent predicting this will happen within the next decade.

As well as posing a threat to our jobs, other experts believe AI could ‘go rogue’ and become too complex for scientists to understand.

They could ‘go rogue’ 

Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don’t fully understand how they work.

If experts don’t understand how AI algorithms function, they won’t be able to predict when they fail.

This means driverless cars or intelligent robots could make unpredictable ‘out of character’ decisions during critical moments, which could put people in danger.

For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.

They could wipe out humanity 

Some people believe AI will wipe out humans completely.

‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.

He singled out artificial intelligence, or AI, as the ‘number 1 risk for this century.’

In August last year, Musk warned that AI poses more of a threat to humanity than North Korea.

‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,’ the 46-year-old wrote on Twitter.

‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.’

Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.

He has argued that controls are necessary in order to prevent machines from advancing beyond human control.


