Google seeks to grant $25 million to AI for ‘good’…

Google announced on Monday that it will give away about $25 million globally next year to humanitarian and environmental projects seeking to use artificial intelligence (AI) to speed up and grow their efforts.

The ‘AI Impact Challenge’ is meant to inspire organizations to ask Google for help in machine learning, a form of AI in which computers analyze large datasets to make predictions or detect patterns and anomalies.
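As a minimal illustration of that kind of pattern detection (a generic sketch, not Google's tooling), the following Python snippet trains scikit-learn's IsolationForest to flag unusual readings in a made-up dataset:

```python
# Minimal illustration of pattern/anomaly detection; not Google's tooling.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(0.0, 1.0, size=(1000, 2))    # typical readings
unusual = rng.uniform(6.0, 8.0, size=(10, 2))    # readings far from the rest
data = np.vstack([normal, unusual])

# The model learns what "typical" looks like and flags the rest with -1.
model = IsolationForest(contamination=0.01, random_state=0).fit(data)
flags = model.predict(data)
print("anomalies flagged:", int((flags == -1).sum()))
```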

Google’s rivals Microsoft Corp and Amazon.com Inc tout ‘AI for good’ initiatives too.

GOOGLE’S SEVEN RULES OF AI 

Google says for its AI to be used, projects must: 

1. Be socially beneficial. 

2. Avoid creating or reinforcing unfair bias. 

3. Be built and tested for safety. 

4. Be accountable to people. 

5. Incorporate privacy design principles. 

6. Uphold high standards of scientific excellence. 

7. Be made available for uses that accord with these principles. 

 

Focusing on humanitarian projects could aid Google in recruiting and soothe critics by demonstrating that its interests in machine learning extend beyond its core business and other lucrative areas, such as military work. 

After employee backlash, Google said this year that it would not renew a deal to analyze U.S. military drone footage.

Google AI Chief Operating Officer Irina Kofman told Reuters the challenge was not a reaction to such pushback, but noted that thousands of employees are eager to work on ‘social good’ projects even though they do not directly generate revenue.

At a media event on Monday, Google showcased existing projects similar to those it wants to inspire. 

In one, Google’s computers recently learned to detect the singing of humpback whales with 90 percent precision from 170,000 hours of underwater audio recordings gathered by the U.S. government.

The audio previously required manual analysis, meaning ‘this is the first time this dataset has been looked at in a comprehensive way,’ said Ann Allen, a National Oceanic and Atmospheric Administration ecologist.

Identifying patterns could show how humans have affected whales’ migration, Allen said. Eventually, real-time audio analysis could help ships avoid whale collisions.
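Google has not published the detector's internals, but the broad task Allen describes, picking clips that contain a tonal 'song' out of background noise, can be sketched in a few lines of Python; the sample rate, frame size and synthetic audio below are all invented for illustration:

```python
# Illustrative sketch only; Google has not published this pipeline.
# A toy detector separates clips containing a tonal "song" from plain
# background noise, the same broad task as flagging humpback song.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)
SR = 4000        # sample rate in Hz (invented for the demo)
CLIP = SR * 2    # two-second clips
FRAME = 256      # FFT frame length

def make_clip(has_song: bool) -> np.ndarray:
    clip = rng.normal(0.0, 1.0, CLIP)           # stand-in for ocean noise
    if has_song:
        t = np.arange(CLIP) / SR
        freq = rng.uniform(100, 400)            # tonal "song" component
        clip = clip + 3.0 * np.sin(2 * np.pi * freq * t)
    return clip

def features(clip: np.ndarray) -> np.ndarray:
    # Average magnitude spectrum across frames: a crude spectrogram summary.
    n = (len(clip) // FRAME) * FRAME
    frames = clip[:n].reshape(-1, FRAME)
    return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)

# Build a balanced labeled dataset, train on 300 clips, test on 100.
X = np.array([features(make_clip(bool(i % 2))) for i in range(400)])
y = np.array([i % 2 for i in range(400)])
clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
print("precision:", precision_score(y[300:], clf.predict(X[300:])))
```

A production detector would typically learn from spectrograms with a neural network rather than a logistic regression; the sketch keeps only the overall shape of the task.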

To be sure, the data have gaps since whales are not always singing, and getting vessels to use animal location data could require new regulation, two whale experts said.

Julie Cattiau, a Google product manager for the whale work, said Google plans to make the whale software available to additional organizations so they can improve it.

Google will not charge for such tools, Cattiau said, though users could choose to pair them with paid Google cloud services.

The move comes after Google said it is calling off its controversial ‘Project Maven’ program with the Pentagon. The contract is set to expire in 2019, and Google Cloud CEO Diane Greene said the company won’t renew it past then.

Jacquelline Fuller, vice president of Google’s nonprofit arm, Google.org, said impact challenge applications would be due Jan. 20 and judged on total potential beneficiaries, feasibility and ethical considerations.

This year, Google.org filtered grant applications with its own machine-learning tool for the first time, Fuller said, after receiving a record number of entries for an Africa-specific competition.
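Fuller did not say how that filtering tool works; purely as a hypothetical sketch of screening a large applicant pool with machine learning, the snippet below scores application text with a TF-IDF model, using invented examples and labels:

```python
# Hypothetical sketch of triaging applications with a text classifier;
# the labels, example texts and model choice are all invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny stand-in for past applications already screened by human reviewers.
train_texts = [
    "use satellite imagery to predict crop failure for smallholder farmers",
    "machine learning to triage patient records in rural clinics",
    "generic proposal with no project plan or named beneficiaries",
    "advertising pitch unrelated to any humanitarian goal",
]
train_labels = [1, 1, 0, 0]    # 1 = advance to human review

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_texts)
clf = LogisticRegression().fit(X_train, train_labels)

# Score a new application by its predicted probability of class 1.
new_apps = ["flood forecasting models for coastal villages"]
scores = clf.predict_proba(vec.transform(new_apps))[:, 1]
print("review-priority score:", round(float(scores[0]), 2))
```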

GOOGLE’S AI BAN: WHERE WILL ITS SOFTWARE NOT BE USED? 

Google says it will not let its AI be used for: 

  • Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

 

Read more at DailyMail.co.uk