Former Google chief Eric Schmidt has revealed he is ‘very concerned’ that Russia and China are leading the race on artificial intelligence.
Schmidt flagged the risk of their commercial as well as military aspirations, saying their lead in AI could help them conquer the world.
It follows his warning last year that China will overtake the US in AI by 2025.
The former Google chief Eric Schmidt (pictured) has revealed he is ‘very concerned’ Russia and China could use AI to achieve world domination
Speaking at BBC’s Tomorrow’s World Live at London’s Science Museum with Professor Brian Cox, Schmidt, 62, admitted he worries about what rival countries could do with their technology.
‘I’m very concerned about this’, he said in response to a question from a member of the audience about the AI race between China and Russia.
‘I think that both the Russian and the Chinese leaders have recognised the value of this, not just for their commercial aspirations, but also their military aspirations’, he told the audience, writes Daily Star.
‘It is very, very important that the incredible engines that exist in Europe, and Britain, wherever, United States etc, get more funding for basic research, ethics and so forth’, he said.
Schmidt said he would like the US and Europe to deal with Russian competition not by copying their approach but by ‘being more like us’.
‘Let’s outrun them with our own intelligence, rather than any other outcome’, he said.
Last year Schmidt slammed Trump’s government for letting the US fall behind China on AI.
‘I’m assuming our [US] lead will continue over the next five years and then that China will catch up extremely quickly,’ he told the Center for New American Security’s Paul Scharre at the Artificial Intelligence & Global Security Summit on Wednesday, according to Defense One.
‘We need to get our act together, as a country… This is the moment when the [US] government, collectively, and private industry need to say, “these technologies are important”.’
WHY ARE PEOPLE SO WORRIED ABOUT AI?
It is an issue troubling some of the greatest minds in the world at the moment, from Professor Stephen Hawking to Bill Gates and Elon Musk.
SpaceX and Tesla CEO Elon Musk described AI as our ‘biggest existential threat’ and likened its development to ‘summoning the demon.’
He believes super intelligent machines could use humans as pets.
Professor Hawking has recently said it is a ‘near certainty’ that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
They could steal jobs
More than 60 per cent of people fear that robots will lead to there being fewer jobs in the next ten years, according to a 2016 YouGov survey.
And 27 per cent predict that it will decrease the number of jobs ‘a lot’, with previous research suggesting admin and service sector workers will be the hardest hit.
A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 per cent predicting this will happen within the next decade.

As well as posing a threat to our jobs, other experts believe AI could ‘go rogue’ and become too complex for scientists to understand.
They could ‘go rogue’
Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don’t fully understand how they work.
If experts don’t understand how AI algorithms function, they won’t be able to predict when they fail.
This means driverless cars or intelligent robots could make unpredictable ‘out of character’ decisions during critical moments, which could put people in danger.
For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.
They could wipe out humanity
Some people believe AI will wipe out humans completely.
‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.
He singled out artificial intelligence, or AI, as the ‘number 1 risk for this century.’
In August last year, Musk warned that AI poses more of a threat to humanity than North Korea.
‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,’ the 46-year-old wrote on Twitter.
‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.’
Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.
He has argued that controls are necessary in order to prevent machines from advancing beyond human control.
In July last year, China unveiled its national plan for the future of artificial intelligence.
‘By 2020, they will have caught up. By 2025, they will be better than us. By 2030, they will dominate the industries,’ Schmidt said.
Trump’s 2018 budget request slashes funds for basic science and research by $4.3 billion (£3 billion), roughly 13 per cent compared to 2016.
‘It feels, as an American, that we are fighting this conflict with one hand behind our back.’
The ex-Alphabet boss has previously warned that the Chinese are poised to erase the American advantage, and that the Trump administration is key in helping them do so.
Earlier in the year Schmidt also revealed he is an AI ‘job elimination denier’.

‘I’ve taken the position of “job elimination denier”,’ he told an audience at MIT, according to CNBC.
‘I’ve just decided I’m going to be contrarian, because the data supports me, and it’s more fun to be in opposition anyway,’ he said.
Still, ‘there’s no question that there’s job dislocation. But there [are] always new solutions,’ he said.
‘The economic folks would say that you can see the job that’s lost, but you very seldom can see the job that’s created.’
Artificial Intelligence has been described as a threat that could be ‘more dangerous than nukes’.
One group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking, has signed an open letter promising to ensure AI research benefits humanity.
The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future.
The document, drafted by the Future of Life Institute, said scientists should seek to head off risks that could wipe out mankind.
The authors say there is a ‘broad consensus’ that AI research is making good progress and would have a growing impact on society.