Activists warn UN about dangers of using AI to make decisions on what human soldiers should target and destroy on the battlefield
- Activists warned about killer robots in the military at a UN panel discussion
- Said it is unethical, immoral and a decision that cannot be undone
- Also noted it is difficult to assign blame when war crimes occur
- The machine, programmer, commander and manufacturer are all involved in the process
A Nobel Peace Prize winner has warned against robots making life-and-death decisions on the battlefield, saying it is ‘unethical and immoral’ and can never be undone.
Jody Williams made the statement at the United Nations in New York City after the US military announced a project that uses AI to make decisions on what human soldiers should target and destroy.
Williams also pointed out the difficulty of holding those involved accountable for certain war crimes, as there will be a programmer, manufacturer, commander and the machine itself involved in the act.
Jody Williams (right) has warned against robots making life-and-death decisions on the battlefield, as it is ‘unethical and immoral’ and ‘can never be undone’. She was accompanied by fellow activists Liz O’Sullivan (left) and Mary Wareham (center)
Williams won the prestigious accolade in 1997 after leading efforts to ban landmines and is now an advocate with the ‘Campaign To Stop Killer Robots’.
‘Drones started out, you know, as surveillance equipment, and then suddenly they stuck on some Hellfire missiles, and they were, you know, killer,’ she said during a panel discussion at the United Nations in New York City yesterday.
‘We’re hoping, and really expecting that the larger community would not find out about the research and development of killer robots.’
‘We need to step back and think about how artificial intelligence robotic weapons systems would affect this planet and the people living on it.’
The activists against killer robots have also pleaded with officials to draft regulations for any craft heading into battle by land, sea or air without human intervention. The MQ-9 Reaper is set to incorporate AI for making decisions on the battlefield
Williams is referring to the US military’s new initiative, Project Quarterback, which is using AI to make split-second decisions on how to carry out attacks in the field.
The Campaign To Stop Killer Robots was formed in October 2012 with the goal of banning fully autonomous weapons and thereby retaining meaningful human control over the use of force.
The activists against killer robots have also pleaded with officials to draft regulations for any craft heading into battle by land, sea or air without human intervention.
Liz O’Sullivan, of the International Committee for Robot Arms Control, said: ‘If we allow autonomous weapons to deploy and selectively engage with their own targets, we will see disproportionate false fatalities and error rates with people of color, people with disabilities, anybody who has been excluded from the training sets by virtue of the builders’ own inherent bias.’
Mary Wareham, another activist, pointed out that during meetings at the UN in Geneva in August, ‘Russia and the United States were the key problems’ as they ‘did not want to see any result’ towards the drafting of a ban treaty.
She said, ‘other countries that are investing heavily into ever increasingly autonomous weapon systems include China, South Korea, Israel, the United Kingdom to some extent; perhaps Turkey, perhaps Iran.’
Another dangerous factor that comes into play with killer robots is the question of who or what will be held accountable for war crimes.
‘It’s unclear who, if anyone, could be held responsible for unlawful acts caused by a fully autonomous weapon: the programmer, manufacturer, commander, [or the] machine itself,’ said Williams.
‘This accountability gap would make it difficult to ensure justice, especially for victims.’