AI Model Designed to Sense Mental Disorders Based on Online Conversations

Researchers from Dartmouth College have developed an artificial intelligence (AI) model for detecting mental disorders from conversations on Reddit. It is part of an emerging wave of screening tools that use computers to analyze social media posts and gain insight into people's mental states.

Xiaobo Guo, Guarini ’24, left, and Soroush Vosoughi, assistant professor of computer science, use social media as a lens into human behavior. Image Credit: Robert Gill.

What makes the new model distinctive is its focus on the emotions, rather than the content, of the social media texts being examined. In a paper presented at the 20th International Conference on Web Intelligence and Intelligent Agent Technology (PDF), the researchers show that this approach performs better over time, regardless of the topics discussed in the posts.

There are a number of reasons why people do not seek help for mental health disorders: high costs, stigma, and lack of access to services are among the most common barriers. There is also a tendency to downplay signs of mental disorders or conflate them with ordinary stress, says Xiaobo Guo, Guarini ’24, a co-author of the study. With some encouragement, he says, people are likely to seek help, and that is where digital screening tools can make a difference.

Social media offers an easy way to tap into people’s behaviors.

Xiaobo Guo, Study Co-Author, Guarini School of Graduate and Advanced Studies, Dartmouth College

The data are voluntary and public, available for others to read, Guo says.

The researchers chose Reddit, a vast network of user forums, because it has approximately half a billion active users discussing a wide variety of topics. The posts and comments are publicly available, and the team could gather data dating back to 2011.

In their study, the researchers focused on what they call emotional disorders — major depressive, anxiety, and bipolar disorders — which are characterized by distinctive emotional patterns. They examined data from users who had self-reported one of these disorders and from users without any known mental disorders.

They trained their model to label the emotions expressed in users’ posts and to map the emotional transitions between successive posts, so a post could be labeled “anger,” “sadness,” “joy,” “no emotion,” “fear,” or a combination of these. The map is a matrix showing how likely it is that a user transitions from any one state to another, such as from anger to a neutral state with no emotion.
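As a rough illustration of the idea, such a transition matrix can be estimated by counting how often each emotion label follows another across a user's consecutive posts. This is only a minimal sketch: the label set shown here is taken from the article, but the estimation details of the actual model are not described in this story.

```python
from collections import defaultdict

# Emotion labels mentioned in the article; the paper's full label set may differ.
EMOTIONS = ["anger", "sadness", "joy", "fear", "no emotion"]

def transition_matrix(post_emotions):
    """Estimate transition probabilities between the emotion labels of
    consecutive posts. Each row of the result sums to 1 (or is all zeros
    if that emotion never occurred as a source state)."""
    counts = {e: defaultdict(int) for e in EMOTIONS}
    for prev, curr in zip(post_emotions, post_emotions[1:]):
        counts[prev][curr] += 1
    matrix = {}
    for src in EMOTIONS:
        total = sum(counts[src].values())
        matrix[src] = {dst: (counts[src][dst] / total if total else 0.0)
                       for dst in EMOTIONS}
    return matrix

# Hypothetical sequence of per-post emotion labels for one user.
posts = ["anger", "anger", "no emotion", "sadness", "anger"]
m = transition_matrix(posts)
print(m["anger"]["no emotion"])  # 0.5: one of the two transitions out of "anger"
```

The matrix itself, rather than any individual post, then serves as the user-level feature.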

Different emotional disorders have their own signature patterns of emotional change. By generating an emotional “fingerprint” for a user and comparing it against established signatures of emotional disorders, the model can detect them.
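One simple way to picture this comparison is nearest-centroid matching: flatten each transition matrix into a vector and assign the user to whichever reference signature is closest. This is a hypothetical sketch of the fingerprint-matching idea, not the paper's actual classifier; the signature values below are invented for illustration.

```python
import math

# Hypothetical reference "signatures": flattened transition matrices per
# condition. The values are illustrative only, not taken from the study.
SIGNATURES = {
    "control":    [0.7, 0.1, 0.1, 0.1],
    "depression": [0.2, 0.5, 0.2, 0.1],
}

def distance(a, b):
    """Euclidean distance between two flattened transition matrices."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_signature(fingerprint):
    """Return the condition whose reference signature lies closest to the
    user's emotional fingerprint."""
    return min(SIGNATURES, key=lambda c: distance(fingerprint, SIGNATURES[c]))

print(nearest_signature([0.25, 0.45, 0.2, 0.1]))  # "depression"
```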

To validate their results, the researchers tested the model on posts that were not used during training and showed that it accurately predicts which users may or may not have one of these disorders.

This approach avoids a key problem known as “information leakage” that conventional screening tools run into, says Soroush Vosoughi, assistant professor of computer science and another co-author. Other models are built around analyzing and relying on the content of the text, he says, and while those models show high performance, they can also be misleading.

For instance, if a model learns to associate “COVID” with “sadness” or “anxiety,” Vosoughi explains, it will naturally infer that a researcher studying and posting (quite objectively) about COVID-19 is suffering from anxiety or depression. By contrast, the new model attends only to emotion and learns nothing about the specific event or topic described in the posts.

While the researchers do not explore intervention strategies, they hope this work can point the way toward prevention. In their study, they make a strong case for more careful scrutiny of models built on social media data.

It’s very important to have models that perform well, but also to really understand their workings, biases, and limitations.

Soroush Vosoughi, Study Co-Author and Assistant Professor of Computer Science, Dartmouth College

Journal Reference:

Guo, X., et al. (2022) Emotion-based Modeling of Mental Disorders on Social Media. arXiv.
