
Ex-Google engineer says “killer robots” could carry out atrocities and unlawful killings

A former Google engineer has expressed fears about a new generation of robots that could carry out ‘atrocities and unlawful killings’.

Laura Nolan, who previously worked on the tech giant’s military drone project Maven, is calling for a ban on all autonomous war drones, as these machines do not have the same common sense or discernment as humans.

Maven focused on enhancing drones with artificial intelligence (AI) to distinguish enemy targets from people and other objects – but Google stepped away from the project after employees protested the technology’s development, calling it ‘evil’.


A former Google engineer has expressed fears about a new generation of robots that could carry out ‘atrocities and unlawful killings’. Pictured is the MQ-9 Reaper war drone

Nolan, who left Google in 2018 in protest over its work on US military drone technology, is calling for all drones not operated by humans to fall under the same type of ban as chemical weapons, according to The Guardian.

‘What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed,’ Nolan, who now works with the Campaign to Stop Killer Robots, explained to The Guardian.

‘There could be large-scale accidents because these things will start to behave in unexpected ways.’

‘Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.’

Laura Nolan, who worked on Google’s military drone project Maven, is calling for a ban on all autonomous war drones, as they don’t have the common sense or discernment of humans


The former Google employee also noted that other factors, such as radar signals or unusual weather patterns, could wreak havoc on a weapon’s system, which ‘doesn’t have the discernment or common sense that the human touch has’.

In addition to being unpredictable, such killer robots can only be tested on the battlefield.

‘The other scary thing about these autonomous war systems is that you can only really test them by deploying them in a real combat zone,’ said Nolan.


The U.S. military has been looking to incorporate elements of artificial intelligence and machine learning into its drone program.

Project Maven, as the effort is known, aims to provide some relief to military analysts who are part of the war against Islamic State.

These analysts currently spend long hours staring at big screens reviewing video feeds from drones as part of the hunt for insurgents in places like Iraq and Afghanistan.

The Pentagon is trying to develop algorithms that would sort through the material and alert analysts to important finds, according to Air Force Lieutenant General John N.T. ‘Jack’ Shanahan, director for defense intelligence for warfighting support.

‘A lot of times these things are flying around (and) … there’s nothing in the scene that’s of interest,’ he told Reuters.

Shanahan said his team is currently trying to teach the system to recognize objects such as trucks and buildings, identify people and, eventually, detect changes in patterns of daily life that could signal significant developments.

‘We’ll start small, show some wins,’ he said.

A Pentagon official said the U.S. government is requesting to spend around $30 million on the effort in 2018.

Similar image-recognition technology is being developed commercially by firms in Silicon Valley, and could be adapted by adversaries for military purposes.

Shanahan said he’s not surprised that Chinese firms are making investments there.

‘They know what they’re targeting,’ he said.

Research firm CB Insights says it has tracked 29 investors from mainland China investing in U.S. artificial intelligence companies since the start of 2012.

The risks extend beyond technology transfer.

‘When the Chinese make an investment in an early stage company developing advanced technology, there is an opportunity cost to the U.S. since that company is potentially off-limits for purposes of working with (the Department of Defense),’ the report said.

‘Maybe that’s happening with the Russians at present in Syria, who knows?’

‘What we do know is that at the UN, Russia has opposed any treaty, let alone a ban, on these weapons.’

Project Maven involved training an algorithm to identify certain objects in video from surveillance drones — objects such as cars, or people, for example.

The relatively small project was viewed as a foot in the door for winning the much more massive Joint Enterprise Defense Infrastructure Cloud, or JEDI, contract, a $10 billion, 10-year deal that is currently held by Amazon, but is up for renewal.

In September 2017, Google won the Maven contract, but decided to keep the deal secret, even from its own employees.

As word of Maven leaked internally, employees began voicing outrage, citing the company’s former slogan, ‘don’t be evil,’ according to Wired.

On May 30, 2018, the New York Times published a story about Maven that included emails Fei-Fei Li, a Stanford professor and Google Cloud’s chief scientist for AI, had sent to other executives about weaponized AI.

‘Avoid at ALL COSTS any mention or implication of AI,’ wrote Li. ‘Weaponized AI is probably one of the most sensitized topics of AI—if not THE most. This is red meat to the media to find all ways to damage Google.’

Two days after the leaked emails were published, Google Cloud CEO Diane Greene announced that Google planned not to renew the Maven contract, citing intense backlash, according to Gizmodo.



