Scientists have revealed a robot dog that can teach itself to walk in just one hour.
In a video released by the researchers, the four-legged robot is at first seen flailing its legs in the air and struggling. After just ten minutes it can take steps – and by the one-hour mark it walks with relative ease, rolls off its back and even recovers after being knocked over with a stick by one of the researchers.
Unlike many robots, this one was not shown what to do beforehand in a computer simulation.
Danijar Hafner, an artificial intelligence researcher at the University of California, Berkeley, worked with his colleagues to train the robot using reinforcement learning.
A robotic dog has been trained to walk, roll over and navigate obstacles in about an hour, University of California at Berkeley researchers reveal. Pictured above, the robot at the five minute mark
This type of machine learning involves training algorithms by rewarding them for taking certain actions within their environment.
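The idea can be illustrated with a toy sketch (this is not the researchers' code – the states, actions and rewards below are invented purely to show how reward alone shapes behaviour):

```python
import random

def train(episodes=2000, alpha=0.1, seed=0):
    """Minimal reinforcement-learning illustration: the agent learns,
    from reward alone, which action to prefer in each state."""
    rng = random.Random(seed)
    states, actions = ["fallen", "standing"], ["flail", "step"]
    # Q[state][action]: the agent's current estimate of each action's value.
    Q = {s: {a: 0.0 for a in actions} for s in states}
    for _ in range(episodes):
        s = rng.choice(states)
        a = rng.choice(actions)  # explore randomly
        # Invented reward: stepping while standing is rewarded, flailing is not.
        r = 1.0 if (s == "standing" and a == "step") else 0.0
        Q[s][a] += alpha * (r - Q[s][a])  # nudge estimate toward observed reward
    return Q

Q = train()
best = max(Q["standing"], key=Q["standing"].get)
print(best)  # → step
```

After enough trials, the rewarded action dominates – the same principle, at vastly larger scale, that lets the robot discover walking.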
The team used an algorithm called Dreamer, which uses past experience to build a model of the real world and lets the robot run trial-and-error calculations within that model.
‘The Dreamer algorithm has recently shown great promise for learning from small amounts of interaction by planning within a learned world model,’ the researchers state in their paper, which has not yet been peer-reviewed.
‘Learning a world model to predict the outcomes of potential actions enables planning in imagination, reducing the amount of trial and error needed in the real environment.’
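A heavily simplified sketch of 'planning in imagination' might look like this (an assumption-laden toy, not Dreamer itself – the one-step model, actions and goal below are invented):

```python
def learned_model(position, action):
    # Stand-in for a model fit from past experience:
    # it predicts the next state for each action.
    return position + {"left": -1, "forward": +1}[action]

def imagined_return(position, plan, goal=3):
    """Roll a candidate plan out entirely inside the learned model
    ('imagination'), never touching the real robot."""
    for action in plan:
        position = learned_model(position, action)
    return -abs(goal - position)  # higher is better: closer to the goal

plans = [("forward", "forward", "forward"), ("left", "forward", "forward")]
best_plan = max(plans, key=lambda p: imagined_return(0, p))
print(best_plan)  # → ('forward', 'forward', 'forward')
```

Because candidate plans are scored inside the model, bad ones are discarded without costly (and potentially damaging) trials on the physical robot.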
Researchers used an algorithm called Dreamer that harnesses past experiences to build a model of the real world for the robot to learn from. Pictured above is the robot at 30 minutes
‘Reinforcement learning will be a cornerstone tool in the future of robot control,’ a scientist not affiliated with the study said. Pictured above is the robot at 40 minutes
After the robot learned to walk, it could also learn to adapt to other less predictable outcomes – such as being poked with a stick by researchers.
‘The problem is your simulator will never be as accurate as the real world. There’ll always be aspects of the world you’re missing,’ Hafner explains to MIT Technology Review.
By the one hour mark, the robotic dog, pictured above, can navigate its environment quite well, roll over and more
Jonathan Hurst, a professor of robotics at Oregon State University who is not directly affiliated with the research, tells the tech publication the team’s findings make it clear that ‘reinforcement learning will be a cornerstone tool in the future of robot control.’
Even with reinforcement learning, teaching robots to act properly in the real world remains extremely challenging – engineers must specify, for every action, whether it is rewarded based on whether it is the behaviour the scientists want.
Pictured above, the robot navigates an obstacle
‘A roboticist will need to do this for each and every task [or] problem they want the robot to solve,’ Lerrel Pinto, an assistant professor of computer science at New York University, who specializes in robotics and machine learning, explains to MIT Technology Review.
That would require an enormous amount of code, covering a range of situations that simply can't be predicted.
The research team cites other obstacles to this type of technology:
‘While Dreamer shows promising results, learning on hardware over many hours creates wear on robots that may require human intervention or repair,’ they state in the study.
‘Additionally, more work is required to explore the limits of Dreamer and our baselines by training for a longer time.
‘Finally, we see tackling more challenging tasks, potentially by combining the benefits of fast real world learning with those of simulators, as an impactful future research direction.’
Hafner says he hopes to teach the robot how to obey spoken commands and connect cameras to the dog to give it vision – all of which would allow it to do more typical canine activities like playing fetch.
In a separate study, researchers at Germany’s Max Planck Institute for Intelligent Systems (MPI-IS) revealed that their robotic dog, dubbed Morti, can learn to walk with ease using a complex algorithm informed by sensors in its feet.
‘As engineers and roboticists, we sought the answer by building a robot that features reflexes just like an animal and learns from mistakes,’ Felix Ruppert, a former doctoral student in the Dynamic Locomotion research group at MPI-IS, says in a statement.
‘If an animal stumbles, is that a mistake? Not if it happens once. But if it stumbles frequently, it gives us a measure of how well the robot walks.’
The robot dog works by using a complex algorithm that guides how it learns.
Information from foot sensors is matched against data from a model of the machine’s spinal cord, which runs as a program inside the robot’s computer.
The robotic dog learns to walk by constantly comparing expected and actual sensor data, running reflex loops and adapting the way it regulates its movements.
Scientists at the Max Planck Institute for Intelligent Systems in Germany trained a robotic dog known as Morti to walk using algorithms
Read more at DailyMail.co.uk