Google is removing gender pronouns from the predictive text feature found in its Gmail platform.
The feature will no longer suggest pronouns that indicate a specific gender such as ‘he’, ‘her’, ‘him’ or ‘she’ for fear of suggesting the wrong one and causing offence.
Google staff have revealed the technology will not suggest gender-based pronouns because the risk is too high that its ‘Smart Compose’ technology might predict someone’s sex or gender identity incorrectly.
Gmail product manager Paul Lambert said a company research scientist discovered the problem in January.
He typed ‘I am meeting an investor next week,’ and Smart Compose suggested a possible follow-up question: ‘Do you want to meet him?’ instead of ‘her’.
Consumers have become accustomed to embarrassing gaffes from auto-correct on smartphones but Google is being cautious around such a sensitive topic.
Gender issues are reshaping politics and society, and critics are scrutinising potential biases in artificial intelligence like never before.
‘Not all “screw ups” are equal,’ Mr Lambert said. Gender is ‘a big, big thing’ to get wrong, he added.
Getting Smart Compose right could be good for business: demonstrating a better grasp of AI’s nuances than its competitors is part of the company’s strategy.
Google hopes to build affinity for its brand and attract customers to its AI-powered cloud computing tools, advertising services and hardware.
Gmail has 1.5 billion users, and Mr Lambert said Smart Compose assists on 11 per cent of messages worldwide sent from Gmail.com, where the feature first launched.
Smart Compose is an example of what AI developers call natural language generation (NLG), in which computers learn to write sentences by studying patterns and relationships between words in literature, emails and web pages.
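The pattern-learning idea behind NLG can be illustrated with a toy model. The sketch below is not Google’s system; it is a minimal, hypothetical bigram model that simply counts which word most often follows another in a training corpus and suggests it:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count which word tends to follow each word in the training text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def suggest_next(model, word):
    """Suggest the most frequent follower, mirroring how NLG picks likely continuations."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny illustrative corpus: 'him' appears more often after 'meet' than 'her' does
corpus = [
    "do you want to meet him",
    "do you want to meet him",
    "do you want to meet her",
]
model = train_bigram_model(corpus)
print(suggest_next(model, "meet"))  # prints "him" because it is the most frequent follower
```

A model trained this way will faithfully reproduce whatever imbalances its training text contains, which is exactly how biased pronoun suggestions arise.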
Men have long dominated fields such as finance and science, for example, so the technology would conclude from the data that an investor or engineer is ‘he’ or ‘him’. The issue persists across many tech firms, not just Google.
Mr Lambert said the Smart Compose team of about 15 engineers and designers tried several workarounds, but none proved bias-free or worthwhile.
They decided the best solution was to limit coverage and implement a gendered pronoun ban.
It affects fewer than 1 per cent of cases where Smart Compose would propose something.
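Conceptually, such a ban amounts to a blocklist check run over candidate completions before they are shown. The sketch below is purely illustrative; the function name and word list are assumptions, not Google’s actual implementation:

```python
# Hypothetical filter: suppress any suggestion containing a gendered pronoun.
GENDERED_PRONOUNS = {"he", "him", "his", "she", "her", "hers"}

def allow_suggestion(suggestion):
    """Return False if the candidate completion contains a gendered pronoun."""
    words = suggestion.lower().replace("?", "").replace(".", "").split()
    return not any(w in GENDERED_PRONOUNS for w in words)

print(allow_suggestion("Do you want to meet him?"))  # False: suggestion is suppressed
print(allow_suggestion("Sounds good."))              # True: suggestion is shown
```

Being conservative in this way trades a small amount of coverage, the ‘fewer than 1 per cent’ of cases above, for a guarantee that the system never picks the wrong pronoun.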
‘The only reliable technique we have is to be conservative,’ said Prabhakar Raghavan, who oversaw engineering of Gmail and other services until a recent promotion.
Google’s decision to play it safe on gender follows some high-profile embarrassments for the company’s predictive technologies.
WHY ARE PEOPLE SO WORRIED ABOUT AI?
It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.
SpaceX and Tesla CEO Elon Musk described AI as our ‘biggest existential threat’ and likened its development to ‘summoning the demon’.
He believes super intelligent machines could use humans as pets.
Professor Stephen Hawking said it is a ‘near certainty’ that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
They could steal jobs
More than 60 per cent of people fear that robots will lead to there being fewer jobs in the next ten years, according to a 2016 YouGov survey.
And 27 per cent predict that it will decrease the number of jobs ‘a lot’, with previous research suggesting admin and service sector workers will be the hardest hit.
A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 per cent predicting this will happen within the next decade.
As well as posing a threat to our jobs, other experts believe AI could ‘go rogue’ and become too complex for scientists to understand.
They could ‘go rogue’
Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don’t fully understand how they work.
If experts don’t understand how AI algorithms function, they won’t be able to predict when they fail.
This means driverless cars or intelligent robots could make unpredictable ‘out of character’ decisions during critical moments, which could put people in danger.
For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.
They could wipe out humanity
Some people believe AI will wipe out humans completely.
‘Eventually, I think human extinction will probably occur, and technology will likely play a part in this,’ DeepMind’s Shane Legg said in a recent interview.
He singled out artificial intelligence, or AI, as the ‘number one risk for this century’.
Musk warned that AI poses more of a threat to humanity than North Korea.
‘If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,’ the 46-year-old wrote on Twitter.
‘Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.’
Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.
He has argued that controls are necessary in order to prevent machines from advancing beyond human control.
The company apologised in 2015 when the image recognition feature of its photo service labelled a black couple as gorillas.
In 2016, Google altered its search engine’s autocomplete function after it suggested the anti-Semitic query ‘are jews evil’ when users sought information about Jews.
Google has banned expletives and racial slurs from its predictive technologies, as well as mentions of its business rivals or tragic events.
The company’s new policy banning gendered pronouns also affected the list of possible responses in Google’s Smart Reply.
That service allows users to respond instantly to text messages and emails with short phrases such as ‘sounds good’.
Google uses tests developed by its AI ethics team to uncover new biases. A spam and abuse team pokes at systems, trying to find ‘juicy’ gaffes by thinking as hackers or journalists might, Mr Lambert said.
Workers outside the United States look for local cultural issues. Smart Compose will soon work in four other languages: Spanish, Portuguese, Italian and French.
AI experts have called on companies deploying such systems to display a disclaimer and offer multiple possible translations.
Microsoft’s LinkedIn said it avoids gendered pronouns in its year-old predictive messaging tool, Smart Replies, to ward off potential blunders.
Warnings and limitations like those in Smart Compose remain the most-used countermeasures in complex systems, said John Hegele, integration engineer at Durham, North Carolina-based Automated Insights Inc, which generates news articles from statistics.
‘The end goal is a fully machine-generated system where it magically knows what to write,’ Mr Hegele said.
‘There’s been a ton of advances made but we’re not there yet.’