The secretary-general of the UN has warned ‘killer robots’ could replace human soldiers and called for an international ban on such war machines.
Antonio Guterres said a cyberarms race was already underway and suggested that a huge cyberattack could trigger a global conflict.
In a speech at the Paris Peace Forum earlier this month, Guterres said that these killer machines could take the place of traditional armies.
He went on to call for a ban on autonomous weapons and tighter regulation on cybercrime.
The warning comes as major military powers are racing to develop weapons that select and fire on targets without meaningful human control.
UN Secretary-General Antonio Guterres has called for an international treaty banning autonomous weapons. He is pictured (above) delivering a speech in Paris earlier this month
An MQ-9 Reaper remotely piloted aircraft during a training mission at Creech Air Force Base. The MQ-9 Reaper is set to incorporate AI for making decisions on the battlefield
These largely autonomous systems have driven fears that the weapons could become uncontrollable and wipe out huge numbers of people.
Guterres has now called for a new international treaty to ban such killer robots, saying that ‘machines that have the power and discretion to kill without human intervention are politically unacceptable and morally despicable’.
He said: ‘Cybercrime thrives in a poorly regulated or unregulated environment.
‘Disinformation campaigns, orchestrated very cheaply, can reach the furthest side of the world. Cyberattacks can paralyse entire countries or companies.
‘And a new arms race – the cyberarms race – is already under way. The danger is that the next war will be triggered by a massive cyberattack.
‘Tomorrow, killer robots could take the place of soldiers. We must ban all autonomous weapons.
‘Machines that have the power and discretion to kill without human intervention are politically unacceptable and morally despicable.’
In September, China joined 28 other states in saying it would support prohibiting fully autonomous weapons, but later clarified that Beijing opposed only their use on the battlefield, not their production and development.
Both the US and Russia have blocked any moves to form legally binding agreements on autonomous weaponry, and the US’s MQ-9 Reaper drone is set to incorporate AI for making decisions on the battlefield.
The US military’s new initiative, Project Quarterback, is using AI to make split-second decisions on how to carry out attacks in the field.
Other countries investing heavily in increasingly autonomous weapons systems include South Korea and the United Kingdom, with suggestions that Turkey and Iran may also be investigating the technology.
So far it is believed militaries have yet to deploy killer robots on the battlefield, but a report by the peace organisation Pax identified at least 30 global arms firms that do not have policies against developing these kinds of weapons systems.
It is feared these firms, based in China, Russia, the US and Israel, are developing such weapons faster than governments can introduce regulation.
(Left to right) Liz O’Sullivan of the International Committee for Robot Arms Control (ICRAC), Mary Wareham, global coordinator of the Campaign to Stop Killer Robots, and Jody Williams, at a Campaign to Stop Killer Robots press conference at the UN in New York last month
Guterres went on to ask how ‘technological “progress” could lead to regression in human rights’, adding that ‘we should instead be ensuring that artificial intelligence is used to guarantee that everyone can live in dignity, peace and prosperity’.
The Portuguese diplomat suggested that no single country is able to unify the world stage on this issue, and so ‘multilateral institutions’ need to help increase ‘international solidarity’, as ‘the status quo is untenable’.
Activists against killer robots have pleaded with world leaders to draft regulations for any craft heading into battle, whether by land, sea or air, without human intervention.
Last month Nobel Peace Prize winner Jody Williams warned against robots making life-and-death decisions on the battlefield, calling it ‘unethical and immoral’.
She also pointed out the difficulty of holding those involved accountable for certain war crimes, as there will be a programmer, manufacturer, commander and the machine itself involved in the act.
Williams won the prestigious accolade in 1997 after leading efforts to ban landmines, and is now an advocate with the Campaign to Stop Killer Robots, which presented to the UN in New York in October.