Artificial intelligence model can detect mental health conditions on Reddit

An artificial intelligence model has been created that can detect mental health conditions in a user, just by analysing their conversations on the social platform Reddit.

A team of computer scientists from Dartmouth College in Hanover, New Hampshire, set about training an AI model to analyze social media texts.

It is part of an emerging wave of screening tools that use computers to analyze social media posts and gain an insight into people’s mental states. 

The team selected Reddit to train their model as it has half a billion active users, all regularly discussing a wide range of topics over a network of subreddits.

They focused on the emotional intent behind posts, rather than their actual content, and found that the model became better over time at discovering mental health issues.

This sort of technology could one day be used to help in the diagnosis of mental health conditions, or be put to use in moderating content on social media.   

Previous studies looking for evidence of mental health conditions in social media posts have focused on the text itself, rather than the emotional intent behind it.

There are many reasons why people don’t seek help for mental health disorders, including stigma, high costs, and lack of access to services, the team said. 

There is also a tendency to minimize signs of mental disorders or conflate them with stress, according to Xiaobo Guo, co-author of the new study.

It’s possible that they will seek help with some prompting, he said, and that’s where digital screening tools can make a difference.

‘Social media offers an easy way to tap into people’s behaviors,’ Guo added.

Reddit was their platform of choice because it is widely used by a large, active user base that discusses a wide range of topics.

The posts and comments are publicly available, and the researchers could collect data dating back to 2011.

In their study, the researchers focused on what they call emotional disorders — major depressive, anxiety, and bipolar disorders — which are characterized by distinct emotional patterns that can be tracked.

They looked at data from users who had self-reported as having one of these disorders, and from users without any known mental disorders.

They trained their AI model to label the emotions expressed in users’ posts and map the emotional transitions between different posts.

AI BEING USED TO HELP DETECT MENTAL HEALTH ISSUES

According to the World Health Organization (WHO), one in four people will be affected by mental disorders at some point in their lives.

However, in many parts of the world, patients do not actively seek professional diagnosis.

This is for a number of reasons, including the stigma attached to mental illness and a lack of awareness of mental health conditions and their associated symptoms.

A number of studies have explored using AI to scour big sets of data to predict mental health issues in the people making posts and comments.

In one paper, the team from Dartmouth College created a model for passively detecting mental disorders using conversations on Reddit. 

Specifically, they focused on a subset of mental disorders that are characterized by distinct emotional patterns, including:

Major depressive disorder

Anxiety disorders

Bipolar disorder

Through passive detection, the team say patients can then be encouraged to seek diagnosis and treatment for mental disorders. 

A post could be labeled ‘joy,’ ‘anger,’ ‘sadness,’ ‘fear,’ ‘no emotion,’ or a combination of these by the AI.

The map is a matrix that would show how likely it was that a user went from any one state to another, such as from anger to a neutral state of no emotion. 
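
The paper itself does not include code, but the transition ‘map’ described above can be pictured as a simple Markov-style matrix of probabilities. The Python sketch below is illustrative only: it assumes per-post emotion labels have already been produced by an upstream classifier, and the `transition_matrix` helper and example labels are invented for this explanation.

```python
from collections import defaultdict

# Hypothetical emotion labels, assumed to come from an upstream classifier
EMOTIONS = ["joy", "anger", "sadness", "fear", "no emotion"]

def transition_matrix(post_labels):
    """Estimate P(next emotion | current emotion) from a chronologically
    ordered list of per-post emotion labels for one user."""
    counts = {emotion: defaultdict(int) for emotion in EMOTIONS}
    for current, nxt in zip(post_labels, post_labels[1:]):
        counts[current][nxt] += 1

    matrix = {}
    for current in EMOTIONS:
        total = sum(counts[current].values())
        matrix[current] = {
            nxt: (counts[current][nxt] / total if total else 0.0)
            for nxt in EMOTIONS
        }
    return matrix

# Made-up sequence of labels for one user's posts, oldest first
labels = ["anger", "anger", "no emotion", "sadness", "fear", "sadness"]
matrix = transition_matrix(labels)
print(matrix["anger"]["no emotion"])  # how often this user moved from anger to a neutral state
```

Each row of such a matrix sums to one (once that state has been observed), so it can be read as the probability of moving from the row’s emotion to each column’s emotion in the user’s next post.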

Different emotional disorders have their own signature patterns of emotional transitions, the team explained.

By creating an emotional ‘fingerprint’ for a user and comparing it to established signatures of emotional disorders, the model can detect them. 

For example, certain patterns of word use and tone within a message point to a particular emotional state, and when these are tracked over multiple posts, a broader pattern of emotional transitions emerges.
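
The detection model itself is learned from data, but one simple way to picture the ‘fingerprint’ comparison is as a distance between a user’s transition matrix and a reference matrix for each condition. The sketch below reuses the `transition_matrix` helper and `EMOTIONS` list from the earlier example; the distance measure and the hand-built reference ‘signatures’ are simplifying assumptions, not the paper’s method.

```python
def matrix_distance(a, b, emotions=EMOTIONS):
    """Sum of squared differences between two transition matrices
    stored as nested dicts (smaller means more similar)."""
    return sum(
        (a[row][col] - b[row][col]) ** 2
        for row in emotions
        for col in emotions
    )

def closest_signature(user_matrix, signatures):
    """Return the label whose reference matrix the user's emotional
    'fingerprint' most closely resembles. `signatures` maps a label
    (e.g. 'anxiety' or 'control') to a reference transition matrix."""
    return min(signatures,
               key=lambda label: matrix_distance(user_matrix, signatures[label]))
```

In the study itself the comparison is learned by the model rather than hand-coded, and data from users without any known disorders gives it a baseline pattern to compare against.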

To validate their results, the researchers tested the model on posts that were not used during training and showed that it accurately predicted which users may or may not have one of these disorders, and that its accuracy improved over time.
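
Held-out testing of this kind follows a familiar pattern: set some labelled users aside, never show them to the model during training, then score predictions against the self-reported labels. The sketch below is a generic illustration of that procedure rather than the paper’s evaluation code; `train_model` and `predict` are placeholders for whichever model is being assessed.

```python
import random

def holdout_accuracy(users, labels, train_model, predict, test_fraction=0.2, seed=0):
    """Split users into train/test sets, fit on the training portion only,
    and report accuracy on the unseen test users."""
    indices = list(range(len(users)))
    random.Random(seed).shuffle(indices)

    cut = int(len(indices) * (1 - test_fraction))
    train_idx, test_idx = indices[:cut], indices[cut:]

    model = train_model([users[i] for i in train_idx],
                        [labels[i] for i in train_idx])

    correct = sum(predict(model, users[i]) == labels[i] for i in test_idx)
    return correct / len(test_idx)
```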

‘This approach sidesteps an important problem called ‘information leakage’ that typical screening tools run into,’ says Soroush Vosoughi, assistant professor of computer science and another co-author.  

Other models are built around scrutinizing and relying on the content of the text, he says, and while the models show high performance, they can also be misleading.

‘For instance, if a model learns to correlate “COVID” with “sadness” or “anxiety,”’ Vosoughi explains, ‘it will naturally assume that a scientist studying and posting (quite dispassionately) about COVID-19 is suffering from depression or anxiety.

‘On the other hand, the new model only zeroes in on the emotion and learns nothing about the particular topic or event described in the posts.’

While the researchers don’t look at intervention strategies, they hope this work can point the way to prevention. In their paper, they make a strong case for more thoughtful scrutiny of models based on social media data. 

‘It’s very important to have models that perform well,’ says Vosoughi, ‘but also really understand their working, biases, and limitations.’  

The findings have been published as a preprint on arXiv.  

HOW ARTIFICIAL INTELLIGENCES LEARN USING NEURAL NETWORKS

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn.

ANNs can be trained to recognise patterns in information – including speech, text data, or visual images – and are the basis for a large number of the developments in AI over recent years.

Conventional AI uses input to ‘teach’ an algorithm about a particular subject by feeding it massive amounts of information.   

Practical applications include Google’s language translation services, Facebook’s facial recognition software and Snapchat’s image altering live filters.

The process of inputting this data can be extremely time consuming, and is limited to one type of knowledge. 
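
As a toy example of that ‘teaching by feeding in information’ step, the sketch below trains a tiny network on a handful of labelled examples. PyTorch is an assumption made purely for illustration, and the task (learning XOR) and all the numbers are arbitrary; nothing here comes from the Dartmouth study.

```python
import torch
from torch import nn

# Toy pattern-recognition task: learn XOR from labelled examples
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([0, 1, 1, 0])

# A small ANN: two inputs -> hidden layer -> two output classes
model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 2))
optimiser = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    optimiser.zero_grad()
    loss = loss_fn(model(X), y)   # how wrong the network currently is
    loss.backward()               # push the error back through the layers
    optimiser.step()              # nudge the weights to reduce the error

print(model(X).argmax(dim=1))     # should approach tensor([0, 1, 1, 0])
```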

A new breed of ANNs called Adversarial Neural Networks pits the wits of two AI bots against each other, which allows them to learn from each other. 

This approach is designed to speed up the process of learning, as well as refining the output created by AI systems. 
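
The ‘two bots’ idea is most familiar from generative adversarial networks, in which a generator tries to produce convincing fakes while a discriminator tries to spot them. The PyTorch sketch below is a minimal, illustrative version on one-dimensional data; the architecture and hyperparameters are arbitrary assumptions.

```python
import torch
from torch import nn

# Two small networks learning from each other: a generator producing samples
# and a discriminator judging whether a sample looks real or fake.
generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0      # 'real' data: a shifted Gaussian
    fake = generator(torch.randn(64, 4))       # the generator's attempt to mimic it

    # Discriminator update: label real samples 1, generated samples 0
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator call its fakes real
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(1000, 4)).mean().item())  # should drift towards roughly 3.0
```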
