Chinese and Russian AI with access to NUKES could start WW3 and spark Armageddon: US fears computer miscalculation could see missiles launched at America

Russia and China must ensure only humans, and never artificial intelligence, are given control of nuclear weapons to avoid a potential doomsday scenario, a senior US official has declared.

Washington, London and Paris have all agreed to maintain total human control over nuclear weapons, State Department arms control official Paul Dean said, as a failsafe to prevent any technological glitches from plunging humanity into a devastating conflict.

Dean, principal deputy assistant secretary in the Bureau of Arms Control, Deterrence and Stability, yesterday urged Moscow and Beijing to follow suit.

‘We think it is an extremely important norm of responsible behaviour and we think it is something that would be very welcome in a P5 context,’ he said, referring to the five permanent members of the United Nations Security Council.

It comes as regulators warned that AI is facing its ‘Oppenheimer moment’ and called on governments to develop legislation restricting its application to military technology before it is too late.

The alarming statement, referencing J. Robert Oppenheimer who helped invent the atomic bomb in 1945 before advocating for controls over the spread of nuclear arms, was made at a conference in Vienna on Monday, where civilian, military and technology officials from more than 100 countries met to discuss the prospect of militarised AI systems. 

Hwasong-18 intercontinental ballistic missile is launched from an undisclosed location in North Korea

Washington, London and Paris have all agreed to maintain total human control over nuclear weapons, State Department arms control official Paul Dean said, urging Russia and China to follow suit (Sarmat intercontinental ballistic missile launch pictured)

A Minuteman III intercontinental ballistic missile is pictured in a silo in an undisclosed location in the US

Though the integration of AI into military hardware is increasing at a rapid clip, the technology is still very much in its nascent stages.

But as of yet, no international treaty exists to ban or limit the development of lethal autonomous weapons systems (LAWS).

‘This is the Oppenheimer Moment of our generation,’ said Austrian Foreign Minister Alexander Schallenberg. ‘Now is the time to agree on international rules and norms.’

During his opening remarks at the Vienna Conference on Autonomous Weapons Systems, Schallenberg described AI as the most significant advancement in warfare since the invention of gunpowder over a millennium ago.

The only difference, he continued, is that AI is even more dangerous.

‘At least let us make sure that the most profound and far-reaching decision — who lives and who dies — remains in the hands of humans and not of machines,’ Schallenberg said.

The Austrian minister argued that the world needs to ‘ensure human control,’ given the troubling trend of military AI software replacing human beings in the decision-making process.

‘The world is approaching a tipping point for acting on concerns over autonomous weapons systems, and support for negotiations is reaching unprecedented levels,’ said Steve Goose, arms campaigns director at Human Rights Watch. 

‘The adoption of a strong international treaty on autonomous weapons systems could not be more necessary or urgent.’

There are already examples of AI being used in a military context to lethal effect.

Earlier this year, a report from +972 magazine cited six Israeli intelligence officers who admitted to using an AI called ‘Lavender’ to classify as many as 37,000 Palestinians as suspected militants — marking these people and their homes as acceptable targets for air strikes.

Lavender was trained on data from Israeli intelligence’s decades-long surveillance of Palestinian populations, using the digital footprints of known militants as a model for what signal to look for in the noise, according to the report.

Meanwhile, Ukraine is developing AI-enabled drones that could lock on to Russian targets from further away and be more resilient to electronic countermeasures in efforts to ramp up its military capabilities as war rages on.

Deputy Defence Minister Kateryna Chernohorenko said Kyiv is developing a new system that could autonomously discern, hunt and strike its targets from afar.

This would make the drones harder to shoot down or jam, she said, and would reduce the threat of retaliatory strikes against drone pilots.

As of yet, no international treaty exists to ban or limit the development of lethal autonomous weapons systems (LAWS)

Civilian, military and technology leaders from over 100 countries convened Monday in Vienna to discuss regulatory and legislative approaches to autonomous weapons systems and military AI

A pilot practices with a drone on a training ground in Kyiv region on February 29, 2024, amid the Russian invasion of Ukraine

‘Our drones should be more effective and should be guided towards the target without any operators.

‘It should be based on visual navigation. We also call it “last-mile targeting”, homing in according to the image,’ she told The Telegraph.

Monday’s conference on LAWS in Vienna came as the Biden administration tries to deepen separate discussions with China over both nuclear weapons policy and the growth of artificial intelligence.

The spread of AI technology surfaced during sweeping talks between US Secretary of State Antony Blinken and China’s Foreign Minister Wang Yi in Beijing on April 26.

The two sides agreed to hold their first bilateral talks on artificial intelligence in the coming weeks, Blinken said, adding that they would share views on how best to manage risks and safety surrounding the technology.

As part of normalising military communications, US and Chinese officials resumed nuclear weapons discussions in January, but formal arms control negotiations are not expected any time soon.

China, which is expanding its nuclear weapons capabilities, urged in February that the largest nuclear powers should first negotiate a no-first-use treaty among themselves.
