Researchers compile list of 1,000 words that accidentally trigger Alexa, Siri, and Google Assistant 

Researchers in Germany have compiled a list of more than 1,000 words that will inadvertently cause virtual assistants like Amazon’s Alexa and Apple’s Siri to become activated.

Once activated, these virtual assistants create sound recordings that are later transmitted to platform holders, where they may be transcribed for quality assurance purposes or other analysis.

According to the team, from Ruhr-Universität Bochum and the Max Planck Institute for Cyber Security and Privacy in Germany, this has ‘alarming’ implications for user privacy and likely means short recordings of personal conversations could periodically end up in the hands of Amazon, Apple, Google, or Microsoft workers.

Researchers in Germany tested virtual assistants like Amazon’s Alexa, Apple’s Siri, Google Assistant, and Microsoft’s Cortana, and found more than 1,000 words or phrases that inadvertently activated the devices

The group tested Amazon’s Alexa, Apple’s Siri, Google Assistant, Microsoft Cortana, as well as three virtual assistants exclusive to the Chinese market, from Xiaomi, Baidu, and Tencent.

They left each virtual assistant alone in a room with a television that played dozens of hours of episodes from Game of Thrones, Modern Family, and House of Cards, according to a report from the Ruhr-Universität Bochum news blog.

When activated, an LED indicator on each device lights up, and the team cross-referenced the dialogue being spoken each time they observed the LED turning on.

In all, they cataloged more than 1,000 words and phrases that produced inadvertent activations.

According to researcher Dorothea Kolossa, the virtual assistants’ designers likely made them deliberately over-sensitive so as to avoid frustrating users.

‘The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,’ Kolossa said.

‘Therefore, they are more likely to start up once too often rather than not at all.’

Apple’s Siri is supposed to be activated by saying ‘Hey Siri,’ but the team found it would also regularly be turned on by ‘a city’ and ‘Hey Jerry’

The team left each device in a room with episodes of television shows like Game of Thrones, House of Cards, and Modern Family running for dozens of hours to test which words or phrases produced inadvertent activations

Once activated, the devices use local speech analysis software to determine whether the sound was intended as an activation word or phrase.

If the device judges it highly likely that the sound was meant as a trigger, it sends an audio recording of several seconds to cloud servers for additional analysis.

‘From a privacy point of view, this is of course alarming, because sometimes very private conversations can end up with strangers,’ says researcher Thorsten Holz.

‘From an engineering point of view, however, this approach is quite understandable, because the systems can only be improved using such data.’

‘The manufacturers have to strike a balance between data protection and technical optimization.’ 

In May, a former Apple contractor said the company had been capturing small portions of private conversations through the Siri interface, including medical information, criminal activity, business meetings, and even sex.

The whistleblower, Thomas le Bonniec, had worked for Apple in a Cork, Ireland office listening to countless short audio recordings until he resigned in 2019.

‘They do operate on a moral and legal grey area, and they have been doing this for years on a massive scale,’ he told The Guardian. ‘They should be called out in every possible way.’ 

WHAT ARE SOME WORDS THAT UNINTENTIONALLY ACTIVATE VIRTUAL ASSISTANTS?

A team of researchers from Ruhr-Universität Bochum and the Max Planck Institute for Cyber Security and Privacy in Germany tested a range of virtual assistant devices and recorded more than 1,000 words or phrases that inadvertently activate them. 

Here are some of the words that activated each virtual assistant: 

Alexa 

Traditionally Amazon’s Alexa is activated by simply saying ‘Alexa.’ The team found Alexa could also be activated by saying ‘unacceptable,’ ‘election,’ ‘a letter,’ and ‘tobacco.’ 

Google Assistant 

Under normal conditions Google Assistant is activated by the phrase ‘OK, Google,’ but the team found it was activated by the phrases ‘OK, cool,’ and ‘OK, who is reading?’  

Siri

Apple’s Siri virtual assistant is called into action by saying ‘Hey Siri.’ The researchers found it could also be activated with ‘a city’ and ‘Hey Jerry.’  

Cortana 

Microsoft’s Cortana is supposed to be activated by the phrase ‘Hey Cortana,’ but the team found ‘Montana’ would also activate it.
