Can AI-Driven Voice Analysis Help Identify Mental Disorders?

This article is part of a limited series about the potential of artificial intelligence to solve everyday problems.

Imagine a test as quick and easy as taking your temperature or blood pressure that could reliably detect an anxiety disorder or predict an impending depressive relapse.

Healthcare providers have many tools to assess a patient’s physical condition, but no reliable biomarkers — objective indicators of medical conditions observed from outside the patient — to assess mental health.

But some artificial intelligence researchers now believe that the sound of your voice may hold the key to understanding your mental state, and that AI is perfectly suited to detecting changes that would otherwise be difficult, if not impossible, to perceive. The result is a suite of apps and online tools for tracking your mental status, as well as programs that deliver real-time mental health assessments to telemedicine and call center providers.

Psychologists have long known that certain mental health problems can be detected by listening not only to what a person says but how they say it, said Maria Espinola, a psychologist and an assistant professor at the University of Cincinnati College of Medicine.

In depressed patients, Dr. Espinola said, “Their speech is generally more monotonous, flat and soft. They also have a reduced pitch range and lower volume. They take more pauses. They stop more often.”

Patients with anxiety feel more tension in their bodies, which can also change how their voices sound, she said. “They tend to speak faster. They have more difficulty breathing.”

Today, these types of vocal features are being used by machine learning researchers to predict depression and anxiety, as well as other mental illnesses like schizophrenia and post-traumatic stress disorder. Deep learning algorithms can uncover additional patterns and characteristics, captured in short voice recordings, that may not be evident even to trained experts.
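To make those cues concrete, here is a minimal sketch, assuming the open-source librosa audio library, of how prosodic features like pitch range, loudness and pausing might be computed from a short recording. It is illustrative only, not the pipeline of any product mentioned in this article; a classifier would then be trained on such features, or a deep network on the raw audio itself.

```python
# Illustrative prosodic feature extraction (not any company's actual pipeline):
# pitch range, loudness and pausing from a short voice recording, via librosa.
import numpy as np
import librosa

def prosodic_features(path: str) -> dict:
    y, sr = librosa.load(path, sr=16000)  # mono audio at 16 kHz

    # Fundamental frequency (pitch) track via probabilistic YIN;
    # unvoiced frames come back as NaN and are dropped.
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[~np.isnan(f0)]

    # Short-time energy as a rough proxy for loudness.
    rms = librosa.feature.rms(y=y)[0]

    # Non-silent intervals; the gaps between them are pauses.
    speech = librosa.effects.split(y, top_db=30)
    speech_dur = sum(int(e - s) for s, e in speech) / sr
    total_dur = len(y) / sr

    return {
        "pitch_range_hz": float(f0.max() - f0.min()) if f0.size else 0.0,
        "pitch_variability": float(f0.std()) if f0.size else 0.0,  # "monotone" ~ low
        "mean_energy": float(rms.mean()),                          # "soft" ~ low
        "pause_fraction": 1.0 - speech_dur / total_dur,            # pausing
    }
```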

“The technology we’re using now can extract meaningful features that even the human ear can’t pick up,” said Kate Bentley, an assistant professor at Harvard Medical School and a clinical psychologist at Massachusetts General Hospital.

“There’s a lot of excitement about finding biological or more objective indicators of psychiatric diagnoses that go beyond the more subjective forms of assessment traditionally used, like clinician-rated interviews or self-report measures,” she said. Other clues researchers are tracking include changes in activity levels, sleep patterns, and social media data.

These technological advances come at a time when the need for mental health care is especially acute: according to a report by the National Alliance on Mental Illness, one in five adults in the United States experienced a mental illness in 2020. And the numbers continue to rise.

Although AI technology can’t solve the shortage of qualified mental health providers — there aren’t nearly enough to meet the country’s needs, said Dr. Bentley — there is hope it can lower the hurdles to getting a correct diagnosis, help doctors identify patients who may be reluctant to seek treatment, and facilitate self-monitoring between visits.

“A lot can happen between appointments, and technology can really offer us the potential to improve monitoring and assessment in a more continuous way,” said Dr. Bentley.

To test this new technology, I first downloaded the Mental Fitness app from Sonde Health, a health tech company, to see if my discomfort was a sign of something serious or if I was just languishing. The free app, described as a “voice-activated mental fitness tracking and recording product,” invited me to record my first check-in, a 30-second oral journal entry that would rate my mental health on a scale of 1 to 100.

A minute later I had my score: an unpromising 52. The app flagged the result with a warning.

The app indicated that the level of liveliness detected in my voice was particularly low. Did I sound monotone just because I was trying to speak quietly? Should I heed the app’s suggestions to improve my mental fitness by going for a walk or decluttering my space? (The first question may point to one of the app’s possible flaws: as a consumer, it can be difficult to know why your voice fluctuates.)

Later, feeling jittery between interviews, I tested another voice analysis program, this one focused on detecting anxiety. The StressWaves Test is a free online tool from Cigna, the health care and insurance conglomerate, developed in collaboration with the AI specialist Ellipsis Health to assess stress levels using 60-second samples of recorded speech.

“What keeps you up at night?” the website prompted. After I spent a minute describing my persistent worries, the program scored my recording and emailed me: “Your stress level is moderate.” Unlike the Sonde app, Cigna’s email offered no helpful self-improvement tips.

Other technologies add a potentially helpful layer of human interaction, like Kintsugi, a Berkeley, California-based company that raised $20 million in Series A funding earlier this month. Kintsugi is named after the Japanese practice of repairing broken pottery with veins of gold.

Kintsugi was founded by Grace Chang and Rima Seiilova-Olson, who connected through shared past experiences fighting for access to mental health care. Kintsugi develops technology for telemedicine and call center providers that can help them identify patients who might benefit from further support.

For example, using Kintsugi’s speech analysis program, a nurse could be asked to take an extra minute to ask a distressed parent with a colicky infant about their own well-being.

One issue with developing these types of machine learning technologies is the problem of bias — ensuring programs work the same for all patients, regardless of age, gender, race, nationality, and other demographics.

“For machine learning models to work well, you really need a very large and diverse and robust set of data,” Ms. Chang said, noting that Kintsugi used voice recordings from around the world, in many different languages, to guard against this particular problem.
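One common way to audit for this kind of bias, sketched below as a hypothetical and not as Kintsugi’s actual procedure, is to compare a model’s error rates across demographic subgroups; a large gap between groups would signal exactly the problem researchers worry about.

```python
# Hypothetical subgroup audit: check whether a screening model detects
# positive cases at comparable rates across demographic groups.
from collections import defaultdict

def per_group_recall(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        if y_true == 1:                      # a true positive case
            totals[group] += 1
            hits[group] += int(y_pred == 1)  # ...that the model caught
    return {g: hits[g] / totals[g] for g in totals if totals[g]}

# e.g., recall of 0.85 for speakers of one language but 0.60 for another
# would suggest the model does not "work the same for all patients."
```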

Another major concern in this burgeoning field is privacy — particularly voice data that can be used to identify individuals, said Dr. Bentley.

And even when patients consent to being recorded, the question of consent is sometimes twofold: in addition to assessing a patient’s mental health, some speech analysis programs use the recordings to develop and refine their own algorithms.

Another challenge, Dr. Bentley said, is consumers’ potential distrust of machine learning and so-called black-box algorithms, which work in ways that even the developers themselves cannot fully explain, particularly which features they use to make predictions.
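A standard, if partial, response to the black-box problem is to probe which inputs a model actually leans on. The sketch below uses permutation importance from scikit-learn on toy data with hypothetical feature names; it illustrates the general technique, not how any of the products described here work.

```python
# Permutation importance: shuffle one input feature at a time and measure
# how much the model's score drops, hinting at which features it relies on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy stand-ins for acoustic features (names are purely illustrative).
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(["pitch_range", "energy", "pauses", "rate"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")  # larger drop = model depends on it more
```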

“It’s about creating the algorithm, and it’s about understanding the algorithm,” said Dr. Alexander S. Young, interim director of the Semel Institute for Neuroscience and Human Behavior and chair of psychiatry at the University of California, Los Angeles, echoing the concerns many researchers have about AI and machine learning in general: that there is little, if any, human oversight during a program’s training phase.

For now, Dr. Young is cautiously optimistic about the potential of voice analysis technologies, particularly as a tool for patient self-monitoring.

“I think you can model people’s mental health, or approximate their mental health in general,” he said. “People would like to be able to monitor their own status, especially with chronic diseases.”

But before automated speech analysis technologies enter the mainstream, some experts are calling for rigorous investigations of their accuracy.

“We really need more validation, not only of voice technology but also of AI and machine learning models built on top of other data streams,” said Dr. Bentley. “And we need to achieve that validation through large-scale, well-designed, representative studies.”

Until then, AI-powered speech analysis technology remains a promising but untested tool that may be an everyday way to take the temperature of our mental well-being.
