A robot dog has learned how to automatically recover after being assaulted by a human antagonist.
The robot, named Jueying, is a quadruped – or a four-legged creature – that uses pre-learned skills to quickly respond and adapt to ‘unseen situations,’ such as being pushed down or knocked over with a stick.
The project began by training software that guided a virtual version of the robot dog; the learned expert skills were then used in combination to perform complex behaviors, all of which were uploaded to Jueying.
A video shows the four-legged machine being pulled down, kicked and pushed over, but the AI-powered robot quickly rolls over and stands upright with no human intervention.
Scientists are developing truly autonomous robots, but such machines need to be resilient in the face of failure and able to continue carrying out a mission no matter what hurdles they come across.
And this is what the makers of Jueying are working to achieve.
Jueying was developed in collaboration with researchers from Zhejiang University and the University of Edinburgh, whose work focused on a multi-expert learning architecture, or MELA.
MELA contains a group of specialized deep neural networks (DNNs) that act as the players, together with a gating network that is like the coach. Dr. Alex Li of the University of Edinburgh said the process is ‘similar to a soccer team.’
There are eight expert networks in total: standing balance, large stride trot, left turning, posture control, back righting, small stride trot, lateral rolling and right turning.
These ‘players’ are taught to work together and once this is achieved, they are combined into an overarching network that acts like the ‘coach.’
Li told Wired: ‘The coach or the captain will tell who is doing what, or who should do work together, at which time.’
‘So all experts can collaborate together as a whole team, and this drastically improves the capability of skills.’
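To make the ‘players and coach’ analogy concrete, below is a minimal mixture-of-experts sketch in PyTorch. The layer sizes, the observation and action dimensions, and the choice to blend the experts’ outputs with gating weights are illustrative assumptions, not the researchers’ actual implementation.

```python
# Minimal 'players and coach' sketch: eight expert networks blended by a
# gating network. Dimensions and architecture are hypothetical placeholders.
import torch
import torch.nn as nn


class Expert(nn.Module):
    """One 'player': a small network mapping robot state to joint targets."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.Tanh(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class GatingNetwork(nn.Module):
    """The 'coach': outputs a weight for each expert given the current state."""
    def __init__(self, obs_dim: int, num_experts: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.Tanh(),
            nn.Linear(128, num_experts),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(obs), dim=-1)  # weights sum to 1


class MultiExpertPolicy(nn.Module):
    """Blend the eight experts' outputs using the gating weights."""
    def __init__(self, obs_dim: int = 60, act_dim: int = 12, num_experts: int = 8):
        super().__init__()
        self.experts = nn.ModuleList([Expert(obs_dim, act_dim) for _ in range(num_experts)])
        self.gate = GatingNetwork(obs_dim, num_experts)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        weights = self.gate(obs)                                        # (batch, num_experts)
        actions = torch.stack([e(obs) for e in self.experts], dim=1)    # (batch, num_experts, act_dim)
        return (weights.unsqueeze(-1) * actions).sum(dim=1)             # weighted blend


policy = MultiExpertPolicy()
state = torch.randn(1, 60)       # placeholder robot observation
joint_targets = policy(state)    # blended action from all eight 'players'
```

In this simplified picture, the gating network continually shifts how much say each expert gets, so the mix of skills can change from moment to moment as the robot’s situation changes.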
Wired describes an example of Jueying falling over and needing to recover.
The system is capable of identifying that movement and will prompt the experts responsible for righting and balance.
‘This is a new milestone in robotics and AI, as robots are able to deal with new problems they have not experienced before,’ Li said.
Jueying’s software is trained with each expert individually, and the gating network is then trained with the group as a whole, learning to combine and activate them ‘on the fly.’
‘Meanwhile, all experts are also diversified with unique skills,’ the researchers share.
‘Through co-training, MELA learns adaptive skills across various locomotion modes, such as turning and righting to trotting.’
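As a rough illustration of that two-stage recipe, the sketch below first trains each expert on its own objective in isolation and then co-trains the gating network together with all of the experts. The dummy data, losses, and optimiser settings are placeholders for illustration only, not the paper’s reinforcement-learning setup.

```python
# Hypothetical two-stage training schedule: experts first, then co-training
# with the gating network. All objectives and data here are stand-ins.
import torch
import torch.nn as nn

obs_dim, act_dim, num_experts = 60, 12, 8
experts = nn.ModuleList([nn.Linear(obs_dim, act_dim) for _ in range(num_experts)])
gate = nn.Linear(obs_dim, num_experts)


def blended_action(obs: torch.Tensor) -> torch.Tensor:
    weights = torch.softmax(gate(obs), dim=-1)                 # the 'coach' picks the mix
    actions = torch.stack([e(obs) for e in experts], dim=1)    # every 'player' acts
    return (weights.unsqueeze(-1) * actions).sum(dim=1)


# Stage 1: each expert learns its own skill in isolation (dummy targets here).
for expert in experts:
    opt = torch.optim.Adam(expert.parameters(), lr=1e-3)
    for _ in range(100):
        obs = torch.randn(32, obs_dim)
        target = torch.zeros(32, act_dim)   # stand-in for skill-specific supervision
        loss = ((expert(obs) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

# Stage 2: co-train the gating network and all experts as one team.
opt = torch.optim.Adam(list(gate.parameters()) + list(experts.parameters()), lr=1e-4)
for _ in range(100):
    obs = torch.randn(32, obs_dim)
    target = torch.zeros(32, act_dim)       # stand-in for the overall task objective
    loss = ((blended_action(obs) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```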
The notion is that the robot learns to move much as a human toddler first starts walking: one foot in front of the other, one step and then another, along with a lot of trial and error.