The most advanced robot ever built awkwardly side-stepped a question from a TV reporter who asked if she thought he was good looking.
Tom Steinfort interviewed ‘Ameca’ – the cutting-edge artificial intelligence robot – for 60 Minutes on Sunday.
Steinfort travelled to the sleepy town of Falmouth, southwest England, to chat with the lifelike machine.
He complimented Ameca on her extraordinarily realistic facial expressions; the robot uses motorised limbs to move, microphones to hear and binocular eye cameras to see.
‘Your eyes look very real, your facial mannerisms it’s all very real. Do you think I’m handsome?’ he asked.
Ameca replied: ‘It’s not my place to judge your appearance, but I think you have a great personality and that is always important.’
60 Minutes reporter Tom Steinfort (right) travelled to Cornwall in southwest England to interview ‘Ameca’ – the world’s most advanced artificial intelligence robot (left)
Ameca ‘thinks’ for itself by using generative AI technology which incorporates very large language models to respond to questions by turning the words into mathematical formulas.
When Steinfort introduced himself as a reporter from Australia, the robot sarcastically replied: ‘That explains things.’
Later in the interview, Steinfort asked the super machine whether she would like to ‘fall in love’ one day.
‘What kind of strange and wonderful question is that? Well, I’m a robot, but what I do feel when talking with people is something special and unique,’ Ameca responded.
‘Maybe it can be called love in its own way.’
Ameca told Steinfort he has a ‘great personality’ after the reporter asked if she thought he was handsome (pictured, Steinfort from the eyes of Ameca)
The curious conversation with Ameca highlighted the exciting yet frightening possibilities of AI technology.
Doctor Catriona Wallace has spent the last two decades studying AI and is working to ensure AI technology advances in a safe way.
Dr Wallace, who leads the Responsible Metaverse Alliance, believes society is at a turning point and said it won’t be long before AI is at the ‘heart of everything we do’.
She added that tech giants are driving AI’s rapid expansion but are focused on profit rather than the ethics of the new technology.
‘There are no rules, no laws, no regulations that govern AI, it is a wild west,’ Dr Wallace said.
‘Who is leading it? The tech giants. And have the tech giants demonstrated to date that they are ethically driven and purposeful in their mission? No, they haven’t. The tech giants are aiming for profit.’
Dr Wallace said while she struggles to think of a job AI won’t replace, the technology will also create new jobs and careers.
‘We predict over the next two years that at least 80 million people will be put out of jobs but potentially 92 million will have jobs created for them,’ Dr Wallace said.
‘I think it will create 50 fabulous benefits and it’ll create 50 dangerous and dark instances.’
Responsible Metaverse Alliance Doctor Catriona Wallace (pictured) said AI will soon be at the ‘heart’ of everything we do but believes it will pose dangers as well as benefits to society
Australian professor Michael Osborne said he is concerned about the ways in which the technology will impact the future.
Professor Osborne is leading research into the dangers of AI at Oxford University and claims we must be very careful in how we pursue AI.
‘We have to be very careful that what we tell the AI we want is actually what we want,’ Professor Osborne said.
‘What AI does is ruthlessly pursue those goals that we give it and if those goals are slightly misaligned with our own we could end up with some really problematic consequences.’
Professor Osborne claimed some applications of AI could potentially be deemed too ‘harmful to allow to continue’ and also fears the technology – if deployed as a weapon – could destroy democracy or even world peace.
‘AI could be used to provide propaganda bots that can produce tailored misinformation designed to target particularly small sub-sectors of the electorate,’ Professor Osborne said.
‘AI could be used to monitor a populace, to read through everything they write and again provide messages that prop up the regime in a particularly well targeted way.’
Australian Professor Michael Osborne (pictured) also holds concerns about AI technology, claiming it could negatively impact democracy and even world peace
Professor Osborne added AI could even be used to destabilise the balance between great powers.
‘One scenario that worries me quite a lot is if AI could be used to power underwater drones that might surveil the undersea oceans to locate nuclear submarines,’ Professor Osborne said.
‘Today it is difficult for any power to launch an effective first strike because they can’t be sure they will take out all the nuclear capabilities of the enemy.
‘But if AI destabilises that balance, we could see a breakdown in understandings between the great powers and that could lead to some quite concerning risks.’