Google Glass app can tell autistic children what to say  

One of the things that children with Autism Spectrum Disorder (ASD) can struggle with is maintaining the back and forth of a conversation. 

But now, researchers have developed an app that can serve as a social skills coach for children with ASD.

The app, called Holli, works with Google Glass and listens to the other person during a conversation, providing the wearer with options of what to say next. 


The study, conducted by researchers based at the University of Toronto, involved testing the app on 15 children with ASD. 

The user and the other speaker’s speech are captured through the on-board microphone, and the Google speech recognition engine is used to translate the spoken word into text. 

The researchers developed a system that processes this text and generates an appropriate set of responses, which are displayed to the user via the Google Glass display. 
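The study does not publish Holli's response-generation logic, but the step described above can be sketched as simple keyword matching over the recognised text. Everything in this sketch — the rules, the replies and the function name — is hypothetical and purely illustrative:

```python
# Hypothetical sketch of a Holli-style suggestion step: map keywords
# heard in the partner's utterance to a small set of candidate replies.
# These rules are illustrative; the researchers' actual system is not public.

PROMPT_RULES = {
    "order": ["I'd like a burger, please.", "Could I see the menu?"],
    "drink": ["Water, please.", "A lemonade, please."],
    "anything else": ["No, thank you.", "Some fries, please."],
}

def generate_responses(heard_text):
    """Return candidate replies for a recognised utterance."""
    heard = heard_text.lower()
    for keyword, replies in PROMPT_RULES.items():
        if keyword in heard:
            return replies
    return ["Could you repeat that, please?"]  # fallback when nothing matches

print(generate_responses("What would you like to order?"))
```

In practice a system like this would need far richer language understanding, but the pattern — recognised text in, a short list of display prompts out — matches the pipeline the researchers describe.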

Then, once the user speaks, saying one of the responses, the prompts disappear from the display and the app listens to the next speaker in the conversation. 

The display reads ‘listening’ while it waits to hear somebody speak.
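The 'listening'-then-prompt cycle described above behaves like a small state machine: wait, show suggestions when the partner speaks, and clear them once the wearer says one. A minimal sketch, in which the stub suggestion step and all names are hypothetical:

```python
# Hypothetical sketch of Holli's turn-taking loop. The display shows
# "listening" while waiting, shows prompts after the partner speaks,
# and clears them once the wearer says one of the suggested replies.

def suggest(text):
    # Stand-in for the recognition + suggestion step (illustrative only).
    if "drink" in text.lower():
        return ["Water, please.", "A lemonade, please."]
    return ["Okay."]

def conversation_loop(turns):
    """turns: list of (speaker, text) pairs; returns the display history."""
    display = "listening"
    history = [display]
    prompts = []
    for speaker, text in turns:
        if speaker == "partner":
            prompts = suggest(text)
            display = "prompts: " + " | ".join(prompts)
        elif speaker == "wearer" and text in prompts:
            prompts = []
            display = "listening"  # prompts disappear; wait for next speaker
        history.append(display)
    return history

turns = [("partner", "Would you like a drink?"), ("wearer", "Water, please.")]
print(conversation_loop(turns))
```

Running the two-turn example above, the display goes from 'listening' to the two drink prompts and back to 'listening' once the wearer answers — the same cycle the article describes.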


An overview of the Holli system. The user and the other speaker’s speech are captured through the on-board microphone on Google Glass, and the Google speech recognition engine is used to translate the spoken word into text. The researchers developed a system that processes this text and generates an appropriate set of responses


To test the app, the researchers recruited 15 children with ASD, aged 8 to 16, who were able to read without glasses. 

The participants were given an overview of Holli and were guided through a restaurant-themed conversation that included 10 interactions.

A research assistant played the role of a member of the restaurant staff, prompting the participant for their order. 

Afterwards, the participants filled out a satisfaction questionnaire and answered questions posed by the researchers.  

The results showed that all the participants successfully completed the 10-interaction exchange while using Holli. 

Examples of Holli prompts. The user and the other speaker’s speech are captured through a microphone on Google Glass, and the Google speech recognition engine is used to translate the spoken word into text. The researchers developed a system that processes this text and generates an appropriate set of responses, which are displayed to the user via the display 

The system was almost 90 per cent accurate at detecting the researcher’s speech, and the average user response time was 2.5 seconds. 

In addition, the Holli system was able to understand what the user was saying before they had finished saying it. 

‘This result demonstrates the speech recognition response time is robust enough to process speech, make appropriate predictions, and generate responses in real time,’ the researchers wrote in their study. 

‘This result also supports further development on the Glass for similar applications.’

The users were also satisfied with Holli, indicating that they enjoyed the app, understood how it worked and that it helped them talk to people. 

Example use of Holli in a conversation. The system was almost 90 per cent accurate at detecting the researcher’s speech, and the average user response time was 2.5 seconds. In addition, the Holli system was able to understand what the user was saying before they had finished saying it

 

Read more at DailyMail.co.uk