Google bans the use of its AI tech in weapons

Google will not allow its artificial intelligence software to be used in weapons or ‘unreasonable surveillance’ efforts.

Following a major backlash from employees, the Alphabet unit has laid out new rules for its AI software. 

The new restrictions could help Google management defuse months of protest by thousands of employees against the company’s work with the U.S. military to identify objects in drone video.

Google will not allow its artificial intelligence software to be used in weapons or ‘unreasonable surveillance’ efforts, Chief Executive Sundar Pichai said today.

GOOGLE’S SEVEN RULES OF AI 

Google says that projects using its AI must: 

1. Be socially beneficial. 

2. Avoid creating or reinforcing unfair bias. 

3. Be built and tested for safety. 

4. Be accountable to people. 

5. Incorporate privacy design principles. 

6. Uphold high standards of scientific excellence. 

7. Be made available for uses that accord with these principles. 

Google will pursue other government contracts in areas including cybersecurity, military recruitment and search and rescue, Chief Executive Sundar Pichai said in a blog post Thursday.

‘We recognize that such powerful technology raises equally powerful questions about its use. 

‘How AI is developed and used will have a significant impact on society for many years to come. 

‘As a leader in AI, we feel a deep responsibility to get this right.’

However, he said the firm will still work with the military. 

‘We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,’ he said.

‘These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.

‘These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.’

The move comes after Google said it is calling off its controversial ‘Project Maven’ program with the Pentagon. The contract is set to expire in 2019, and Google Cloud CEO Diane Greene said the company won’t renew it beyond then.

Breakthroughs in the cost and performance of advanced computers have begun to carry AI from research labs into industries such as defense and health. 

Google and its big technology rivals have become leading sellers of AI tools, which enable computers to review large datasets to make predictions and identify patterns and anomalies faster than humans could.
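
As an illustration of the kind of pattern- and anomaly-spotting these tools perform, the sketch below flags outliers in a synthetic dataset with scikit-learn’s IsolationForest. The library, parameters and data are stand-ins chosen for this example, not Google’s actual tooling.

```python
# Illustrative sketch of anomaly detection over a large dataset.
# IsolationForest and the synthetic data are assumptions for this
# example; they do not represent Google's products.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulate a "large dataset": 10,000 normal readings plus a few outliers.
normal = rng.normal(loc=0.0, scale=1.0, size=(10_000, 4))
outliers = rng.normal(loc=6.0, scale=1.0, size=(10, 4))
data = np.vstack([normal, outliers])

# Fit the model and label each record (-1 = anomaly, 1 = normal).
model = IsolationForest(contamination=0.001, random_state=0)
labels = model.fit_predict(data)

print(f"Flagged {np.sum(labels == -1)} of {len(data)} records as anomalous")
```

A human reviewing 10,000 records by eye would take hours; a model like this screens them in seconds, which is the speed advantage the paragraph above describes.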

But the potential of AI systems to pinpoint drone strikes better than military specialists or identify dissidents through mass collection of online communications has sparked concerns among academic ethicists and Google employees.

GOOGLE’S AI BAN: WHERE WILL ITS SOFTWARE NOT BE USED? 

Google says it will not let its AI be used for: 

  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

‘Taking a clear and consistent stand against the weaponization of its technologies’ would help Google demonstrate ‘its commitment to safeguarding the trust of its international base of customers and users,’ Lucy Suchman, a sociology professor at Lancaster University in England, told Reuters ahead of Thursday’s announcement.

Google said it would not pursue AI applications intended to cause physical injury, that tie into surveillance ‘violating internationally accepted norms of human rights,’ or that present greater ‘material risk of harm’ than countervailing benefits.

Its principles also call for employees as well as customers ‘to avoid unjust impacts on people,’ particularly around race, gender, sexual orientation and political or religious belief.

Pichai said Google reserved the right to block applications that violated its principles.

A Google official described the principles and recommendations as a template that anyone in the AI community could put into immediate use in their own software.

Though Microsoft Corp and other firms released AI guidelines earlier, the industry has followed Google’s efforts closely because of the internal pushback against the drone imagery deal.

WHAT IS AI’S ROLE IN DRONE WARFARE?

The U.S. military has been looking to incorporate elements of artificial intelligence and machine learning into its drone program.

Project Maven, as the effort is known, aims to provide some relief to military analysts who are part of the war against Islamic State.

These analysts currently spend long hours staring at big screens reviewing video feeds from drones as part of the hunt for insurgents in places like Iraq and Afghanistan.

The Pentagon is trying to develop algorithms that would sort through the material and alert analysts to important finds, according to Air Force Lieutenant General John N.T. ‘Jack’ Shanahan, director for defense intelligence for warfighting support.

A British Royal Air Force Reaper hunter-killer unmanned aerial vehicle on the flight line, February 21, 2014, in Kandahar, Afghanistan. Military bosses say intelligence analysts are ‘overwhelmed’ by the amount of video being recorded over the battlefield by drones with high-resolution cameras.

‘A lot of times these things are flying around (and) … there’s nothing in the scene that’s of interest,’ he told Reuters.

Shanahan said his team is currently trying to teach the system to recognize objects such as trucks and buildings, identify people and, eventually, detect changes in patterns of daily life that could signal significant developments.
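
To make the idea concrete, the sketch below screens video frames for objects of interest (people and trucks) using an off-the-shelf pretrained detector from torchvision. It is a minimal illustration of the frame-screening concept Shanahan describes, not Project Maven’s actual system; the model choice, the classes watched and the confidence threshold are all assumptions for this example.

```python
# Illustrative sketch: flag video frames containing objects of interest
# so an analyst is alerted only to frames worth reviewing. The detector,
# classes and threshold are assumptions, not the military's real system.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

# Pretrained COCO detector; in COCO's label map, class 1 is "person"
# and class 8 is "truck".
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
CLASSES_OF_INTEREST = {1, 8}  # person, truck
SCORE_THRESHOLD = 0.8

def frame_is_interesting(frame: torch.Tensor) -> bool:
    """Return True if the frame has a confident detection of interest.

    `frame` is a float tensor of shape (3, H, W) with values in [0, 1].
    """
    with torch.no_grad():
        detections = model([frame])[0]
    for label, score in zip(detections["labels"], detections["scores"]):
        if label.item() in CLASSES_OF_INTEREST and score.item() > SCORE_THRESHOLD:
            return True
    return False

if __name__ == "__main__":
    # Stand-in "video": a few random frames in place of real drone footage.
    video = [torch.rand(3, 480, 640) for _ in range(3)]
    for i, frame in enumerate(video):
        if frame_is_interesting(frame):
            print(f"Frame {i}: objects of interest detected, alerting analyst")
```

The design mirrors the workflow in the paragraphs above: the model discards the many frames where ‘there’s nothing in the scene that’s of interest’ and surfaces only candidate detections for human review.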

‘We’ll start small, show some wins,’ he said.

A Pentagon official said the U.S. government has requested around $30 million for the effort in 2018.

Similar image-recognition technology is being developed commercially by firms in Silicon Valley, and it could be adapted by adversaries for military purposes.

Shanahan said he’s not surprised that Chinese firms are making investments there.

‘They know what they’re targeting,’ he said.

Research firm CB Insights says it has tracked 29 investors from mainland China investing in U.S. artificial intelligence companies since the start of 2012.

The risks extend beyond technology transfer.

‘When the Chinese make an investment in an early stage company developing advanced technology, there is an opportunity cost to the U.S. since that company is potentially off-limits for purposes of working with (the Department of Defense),’ the report said.
