AI has moved a step closer to achieving human-like thought, after a new project developed machines capable of the abstract reasoning needed to pass parts of an IQ test.
Experts from DeepMind, which is owned by Google parent company Alphabet, put machine learning systems through their paces with IQ tests, which are designed to measure a number of reasoning skills.
The puzzles in the test involve a series of seemingly random shapes, which participants need to study to determine the rules that dictate the pattern.
Once they have worked out the rules of the puzzle, they should be able to accurately pick the next shape in the sequence.
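The rule-then-predict idea can be shown with a deliberately simplified, one-dimensional stand-in for a matrix puzzle. This is purely an illustration, not DeepMind's system: the panels here are just numbers, the hidden rule is a constant step, and the function names are invented.

```python
# Illustrative toy only: each 'panel' is a number, and the hidden
# rule is a constant additive step between consecutive panels.

def infer_rule(sequence):
    """Work out the single additive step that explains the panels seen so far."""
    steps = {b - a for a, b in zip(sequence, sequence[1:])}
    if len(steps) != 1:
        raise ValueError("no single additive rule fits these panels")
    return steps.pop()

def predict_next(sequence):
    """Apply the inferred rule to pick the next panel in the sequence."""
    return sequence[-1] + infer_rule(sequence)

print(predict_next([1, 3, 5]))  # the rule is +2, so the answer is 7
```

A human test-taker does the same two things with shapes instead of numbers: first infer the governing rule, then apply it to choose the next panel.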
DeepMind researchers hope that developing AI which is capable of thinking outside the box could lead to machines dreaming-up novel solutions to problems that humans may not ever have considered.
A specially-designed software system built for the task was able to achieve a test score of 63 per cent on the IQ-style puzzles.
Researchers at Google’s DeepMind project in London used puzzles known as ‘Raven’s Progressive Matrices’.
Developed by John C Raven in 1936, the Matrices measure participants’ ability to make sense and meaning out of complex or confusing data.
They also test their ability to perceive new patterns and relationships, and to forge largely non-verbal constructs that make it easy to handle complexity.
‘Abstract reasoning is important in domains such as scientific discovery where we need to generate novel hypotheses and then use these hypotheses to solve problems,’ David Barrett at DeepMind told New Scientist.
‘It is important to note that the goal of this work is not to develop a neural network that can pass an IQ test.’
WHAT IS GOOGLE’S DEEPMIND AI PROJECT?
DeepMind was founded in London in 2010 and was acquired by Google in 2014.
It now has additional research centres in Edmonton and Montreal, Canada, and a DeepMind Applied team in Mountain View, California.
DeepMind is on a mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how.
If successful, the firm believes this will be one of the most important and widely beneficial scientific advances ever made.
The company has hit the headlines for a number of its creations, including software that taught itself how to play and win at 49 completely different Atari titles, with just raw pixels as input.
In a world first, its AlphaGo program took on the world’s best player at Go, one of the most complex and intuitive games ever devised, with more positions than there are atoms in the universe – and won.
Human candidates sitting the tests can give themselves a boost by heavy preparation, learning the type of rules used to govern the patterns used in the matrices.
That means, rather than using abstract thought, they are using knowledge they have learned instead.
This is a particular problem for AI systems, which use neural networks fed with vast amounts of data to learn, and could easily be taught to pick up on these patterns without needing to employ abstract thinking.
Instead, the researchers tested a range of standard neural networks on a single property within a matrix, rather than all of the possible properties. They found the networks performed extremely poorly, scoring as low as 22 per cent.
However, a specially designed neural network that could infer relationships between different parts of the puzzle scored the highest mark of 63 per cent.
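The key idea behind the specially designed network is scoring each candidate answer by how well it relates to every other panel. Very loosely, that can be sketched as a relation-style scoring function over all pairs of panels. Everything below is an invented toy with random stand-in weights, not DeepMind's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights standing in for learned parameters.
W_g = rng.standard_normal((8, 16))   # pairwise relation function
W_f = rng.standard_normal(16)        # final scoring function

def relation_score(panels, candidate):
    """Score one candidate answer: combine every ordered pair of panels
    (context panels plus the candidate), sum the pair features, then score."""
    objs = np.vstack([panels, candidate])
    pair_feats = [np.concatenate([a, b]) for a in objs for b in objs]
    summed = np.tanh(np.array(pair_feats) @ W_g).sum(axis=0)
    return float(summed @ W_f)

context = rng.standard_normal((8, 4))      # eight context panels (toy features)
candidates = rng.standard_normal((4, 4))   # four answer choices
scores = [relation_score(context, c[None, :]) for c in candidates]
best = int(np.argmax(scores))              # pick the highest-scoring answer
```

The point of the pairwise structure is that the network is pushed to compare panels with one another, rather than memorising individual shapes, which is what lets it infer relationships between different parts of the puzzle.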
Due to the design of the tests, it was not possible to compare these scores directly with people, as the AI systems had prior training on how to approach them.
Researchers found participants with a lot of experience with the tests, which would be comparable to the trained machines, could score more than 80 per cent. Newcomers to the tests would often fail to answer all the questions.
The full findings are awaiting peer review but can be viewed on the pre-print repository arXiv.
In recent weeks, it was revealed that Google’s artificial intelligence can now dream up an entire world based on a single photo.
The intelligent system, developed as part of the DeepMind program, has taught itself to visualise a space captured in a static photograph from any angle.
Dubbed Generative Query Network, it gives the machine a ‘human-like imagination’.
This allows the algorithm to generate three-dimensional impressions of spaces it has only ever seen in flat, two-dimensional images.
The AI breakthrough was announced by DeepMind CEO Demis Hassabis.
With Generative Query Network, Dr Hassabis and his team tried to replicate the way a living brain learns about its environment simply by looking around.
This is a very different approach to most projects, in which researchers manually label data and slowly feed it to the AI.
To train the DeepMind neural network, the team showed the AI a carousel of static images taken from different viewpoints on the same scene.
By using these images, the algorithm was able to teach itself to predict how something would appear from a new viewpoint not included in the images.
The network soon learnt to imagine complete three-dimensional representations of the scene.
The intelligent machine is even able to move around its imagined space.
As it moves, the algorithm must constantly make predictions about where the objects initially seen in the photos should be, and what they look like from its ever-changing perspective.
HOW DOES ARTIFICIAL INTELLIGENCE LEARN?
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.
ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.
Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.
Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.
The process of inputting this data can be extremely time-consuming, and is limited to one type of knowledge.
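The ‘feed it labelled examples’ approach described above can be sketched with the smallest possible case: a single artificial neuron learning the logical AND pattern from labelled inputs. This is purely illustrative; real ANNs stack many such units in layers.

```python
# One artificial neuron learning the AND pattern from labelled examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

def predict(x1, x2):
    """Fire (output 1) only when the weighted input clears the threshold."""
    return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

for _ in range(20):                       # repeated passes over the labelled data
    for (x1, x2), label in examples:
        err = label - predict(x1, x2)     # nudge weights toward the right answer
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

print([predict(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```

The same loop scaled up to millions of parameters and examples is, in essence, the conventional training regime the article describes.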
A newer breed of ANN, the generative adversarial network (GAN), pits two AI systems against each other, which allows them to learn from one another.
This approach is designed to speed up the process of learning, as well as refining the output created by AI systems.
‘It was not at all clear that a neural network could ever learn to create images in such a precise and controlled manner,’ said lead author of the paper, Ali Eslami.
‘However we found that sufficiently deep networks can learn about perspective, occlusion and lighting, without any human engineering. This was a super surprising finding.’
To generate these complete scenes, Generative Query Network uses two components.
The first handles representation, encoding the three-dimensional scene captured in the static image into a compact mathematical description known as a vector.
The second part, the ‘generative’ component, uses these vectors to imagine what a different viewpoint in that scene – not included in the original images – would be able to see.
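The two-part structure can be sketched in miniature: one component compresses the observed snapshots into a single scene vector, and the other turns that vector plus a new viewpoint into a predicted view. Every shape, weight, and name below is invented for the sketch; this is not DeepMind's architecture, which uses deep convolutional networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: tiny flattened 'images' and random untrained weights.
W_enc = rng.standard_normal((12 + 3, 16))  # image (12) + viewpoint (3) -> features
W_gen = rng.standard_normal((16 + 3, 12))  # scene vector + query viewpoint -> image

def represent(images, viewpoints):
    """Component 1: encode each observation and sum into one scene vector."""
    feats = np.concatenate([images, viewpoints], axis=1) @ W_enc
    return np.tanh(feats).sum(axis=0)

def generate(scene_vector, query_viewpoint):
    """Component 2: 'imagine' the view from a viewpoint never observed."""
    return np.concatenate([scene_vector, query_viewpoint]) @ W_gen

images = rng.standard_normal((3, 12))      # three snapshots of one scene
viewpoints = rng.standard_normal((3, 3))   # camera position for each snapshot
scene = represent(images, viewpoints)      # the 'vector' described above
predicted_view = generate(scene, rng.standard_normal(3))
```

Summing the per-snapshot encodings is one simple way to let the scene vector absorb information from any number of viewpoints, which is why adding more snapshots sharpens the imagined scene.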
Using the data gathered from the initial photos, the system is able to ascertain spatial relationships within the scene.
Ali Eslami explained: ‘Imagine you’re looking at Mt. Everest, and you move a metre – the mountain doesn’t change size, which tells you something about its distance from you.
‘But if you look at a mug, it would change position. That’s similar to how this works.’
Google’s cutting-edge AI can also control objects within this imagined virtual space by applying its understanding of spatial relationships to a scenario.
Schematic illustration of the Generative Query Network. A) The agent observes a training scene from different viewpoints. B) The representation network, f, encodes the observations made from those viewpoints and feeds the result into the generation network, g, which creates the predicted views, allowing a 360° moving reconstruction
WHY ARE PEOPLE SO WORRIED ABOUT AI?
It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.
SpaceX and Tesla CEO Elon Musk described AI as our ‘biggest existential threat’ and likened its development to ‘summoning the demon’.
He believes super intelligent machines could use humans as pets.
Professor Stephen Hawking said it is a ‘near certainty’ that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
They could steal jobs
More than 60 per cent of people fear that robots will lead to there being fewer jobs in the next ten years, according to a 2016 YouGov survey.
And 27 per cent predict that it will decrease the number of jobs ‘a lot’, with previous research suggesting admin and service sector workers will be the hardest hit.
As well as posing a threat to our jobs, other experts believe AI could ‘go rogue’ and become too complex for scientists to understand.
A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 per cent predicting this will happen within the next decade.
They could ‘go rogue’
Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don’t fully understand how they work.
If experts don’t understand how AI algorithms function, they won’t be able to predict when they fail.
This means driverless cars or intelligent robots could make unpredictable ‘out of character’ decisions during critical moments, which could put people in danger.
For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.
They could wipe out humanity
Some people believe AI will wipe out humans completely.
‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.
He singled out artificial intelligence, or AI, as the ‘number one risk for this century’.
Musk warned that AI poses more of a threat to humanity than North Korea.
‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,’ the 46-year-old wrote on Twitter.
‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.’
Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.
He has argued that controls are necessary in order to stop machines from advancing beyond human control.