AI ethics board founded by Google will include eight leading experts – but will it be enough to stop the search giant prying into our private lives?
- Advisory board will consider some of Google’s most complex challenges
- It will advise on matters relating to the development and application of AI
- Designed to help Google avoid any further AI faux pas or privacy scandals
- Includes AI academics, philosophers and a former US deputy secretary of state
Google has set up an external AI ethics council to steer the tech giant away from morally questionable uses of its technology and from encroaching on the privacy of its customers.
It will advise the search giant on matters relating to the development and application of its artificial intelligence research.
Google has been embroiled in past controversies regarding the use of its AI, as well as the way it protects the data it gathers.
It established an internal AI ethics board in 2014 when it acquired DeepMind, but this has been shrouded in secrecy, with no details ever released about its membership.
The firm is a world-leader in many aspects of AI and the eight people recruited for the advisory board will ‘consider some of Google’s most complex challenges’.
Members of the board include Joanna Bryson, an associate professor at the University of Bath, and William Joseph Burns, a former US deputy secretary of state.
Google has been embroiled in past controversies regarding the use of its AI, as well as concerns over it encroaching on the privacy of customers. The board, formally known as the Advanced Technology External Advisory Council (ATEAC), will meet four times in 2019 (file photo)
The board, formally known as the Advanced Technology External Advisory Council (ATEAC), was announced at MIT Technology Review’s EmTech Digital this week.
It has been specially curated to steer the Mountain View-based firm away from any future controversies by ensuring it fully considers morality while developing its artificial intelligence.
Google uses AI in many high-profile forms, including its smart speaker, Google Home, and DeepMind, its specialist AI division.
Privacy concerns were heightened when Google absorbed DeepMind Health – a leading UK health technology developer – last year.
The move raised concerns about the privacy of NHS patients’ data, which is used by DeepMind and could therefore be commercialised by Google.
The AI ethics board was specially curated to steer the Mountain View-based firm away from any controversies by ensuring it fully considers morality while developing its artificial intelligence (file photo)
DeepMind was bought by Google’s parent company Alphabet for £400 million ($520m) in 2014 and had operated independently until it was absorbed in November.
But now the London-based lab shares operations with the US-based Google Health unit.
Google had also previously drawn criticism from the public and from its own staff over Project Maven, a collaboration with the US military to use its AI to control drones destined for enemy territory.
Google decided not to renew this contract in June 2018 following protest resignations from some employees.
Kent Walker, SVP of Global Affairs at Google, said in a blog post: ‘Last June we announced Google’s AI Principles, an ethical charter to guide the responsible development and use of AI in our research and products.
‘To complement the internal governance structure and processes that help us implement the principles, we’ve established an Advanced Technology External Advisory Council (ATEAC).
‘This group will consider some of Google’s most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work.’
He added: ‘This inaugural Council will serve over the course of 2019, holding four meetings starting in April.’