Former Google chief Eric Schmidt believes AI technology is developing so quickly it may soon turn against its human masters.
The billionaire tech leader claims Terminator-like AI movie death scenarios are ‘one to two decades away’ but says we should only ‘worry about them in a while’.
Many AI experts, including Elon Musk, have said we should be wary of a potential AI uprising.
Schmidt himself has previously expressed concern about what countries such as Russia and China will do with AI weapons.
Terminator-style AI takeovers could happen within ‘one to two decades’, according to ex-Google CEO Eric Schmidt. Pictured is Arnold Schwarzenegger playing the Terminator, a cyborg assassin disguised as a human
In the 1984 movie The Terminator, a cyborg assassin disguises itself as a human.
And this type of terrifying scenario might not be far off, according to Schmidt, now an MIT fellow, who spoke at the Munich Security Conference earlier this month.
‘Everyone immediately then wants to talk about all the movie-inspired death scenarios, and I can confidently predict to you that they are one to two decades away.
‘So let’s worry about them, but let’s worry about them in a while,’ Schmidt said, DefenseNews reported.
For the ex-head of Google, the benefits and uses of AI far outweigh any of the negatives.
For Schmidt, the risk of an apocalyptic robot takeover, however unlikely, is worth taking for the medical and technological advances AI provides.
When asked further questions about humans losing control over cyborgs, he replied: ‘You’ve been watching too many movies.
‘Let me be clear: Humans will remain in charge of [AI] for the rest of time,’ he said.
Schmidt believes that no matter how advanced AI becomes, it will never be perfect and it will always have inherent flaws.
The former Google chief Eric Schmidt (pictured) has revealed he is ‘very concerned’ Russia and China could use AI to achieve world domination
‘These technologies [AI] have serious errors in them, and they should not be used with life-critical decisions.
‘So I would not want to be in an airplane where the computer was making all the general intelligence decisions about flying it.
‘The technology is just not reliable enough ― there are too many errors in its use. It is advisory, it makes you smarter and so forth, but I wouldn’t put it in charge of command and control,’ he said at the conference.
Whilst he believes humans will always control the technology, he admits its use in weapons raises some concerns.
Speaking about the development of AI-guided weapon systems around the world, he warned against ignoring what countries like China and Russia were developing.
‘It’s a national program. As I understand, what that means in China is that there will be hundreds of thousands of engineers produced and trained in this.
‘There is no analogous United States or European doctrine, and we need to have one,’ Schmidt noted.
These comments from Schmidt build on previous statements where he revealed he is ‘very concerned’ that Russia and China are leading the race on artificial intelligence.
Schmidt flagged the risk of their commercial as well as military aspirations, saying their lead in AI could help them conquer the world.
It follows his warning from last year that China will overtake the US in AI by 2025.
Speaking at BBC’s Tomorrow’s World Live at London’s Science Museum with Professor Brian Cox, Schmidt, 62, admitted he worries about what rival countries could do with their technology.
‘I’m very concerned about this’, he said in response to a question from a member of the audience about the AI race between China and Russia.
‘I think that both the Russian and the Chinese leaders have recognised the value of this, not just for their commercial aspirations, but also their military aspirations’, he told the audience, the Daily Star reports.
‘It is very, very important that the incredible engines that exist in Europe, and Britain, wherever, United States etc, get more funding for basic research, ethics and so forth’, he said.
Schmidt said he would like the US and Europe to deal with Russian competition not by copying their approach but by ‘being more like us’.
‘Let’s outrun them with our own intelligence, rather than any other outcome’, he said.
Last year Schmidt slammed the Trump administration for letting the US fall behind China when it came to AI.
‘I’m assuming our [US] lead will continue over the next five years and then that China will catch up extremely quickly,’ he told the Center for New American Security’s Paul Scharre at the Artificial Intelligence & Global Security Summit on Wednesday, according to Defense One.
‘We need to get our act together, as a country… This is the moment when the [US] government collectively, and private industry, needs to say, “these technologies are important”.’
In July last year, China unveiled its national plan for the future of artificial intelligence.
The ex-Alphabet boss has previously warned the Chinese are poised to erase the American advantage and that the Trump administration is helping them do it
‘By 2020, they will have caught up. By 2025, they will be better than us. By 2030, they will dominate the industries,’ Schmidt said.
Donald Trump’s 2018 budget request slashes funds for basic science and research by $4.3 billion (£3 billion), roughly 13 per cent compared to 2016.
‘It feels, as an American, that we are fighting this conflict with one hand behind our back,’ he said.
Earlier in the year Schmidt also revealed he is a denier of AI-driven job losses.
‘I’ve taken the position of “job elimination denier”,’ he told an audience at MIT, according to CNBC.
‘I’ve just decided I’m going to be contrarian, because the data supports me, and it’s more fun to be in opposition anyway,’ he said.
Still, ‘there’s no question that there’s job dislocation. But there [are] always new solutions,’ he said.
‘The economic folks would say that you can see the job that’s lost, but you very seldom can see the job that’s created.’
Artificial intelligence has been described as a threat that could be ‘more dangerous than nukes’.
One group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking, has signed an open letter promising to ensure AI research benefits humanity.
The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future.
The document, drafted by the Future of Life Institute, said scientists should seek to head off risks that could wipe out mankind.
The authors say there is a ‘broad consensus’ that AI research is making good progress and would have a growing impact on society.