Future of Mental Health

Smartphones May Hold The Key To America's Mental Health Crisis

Your phone may know more about your mental health than your therapist. Is it time to tap that data?

Written by Miriam Fauzia

Evan Jordan can tell when you are stressed, even though he’s never met or interacted with you. Jordan, an assistant professor of health and wellness design at Indiana University’s School of Public Health-Bloomington, and his team use GPS tracking to find connections between a person’s environment and their mental health. The data could be stunningly revealing.

Why? Because where we are in the world and how much we move are strong indicators of our mental health. Studies suggest that people who take frequent walks to the park are statistically less likely to have anxiety and depression, while those who stay couch-bound and away from greenery are more likely to be diagnosed with those two conditions. It’s certainly not a foolproof diagnostic tool but rather a potentially powerful data point, one that requires nothing more than our smartphones.

Jordan’s research is part of a new wave of technologies that monitor mental health without directly asking how you feel. In a way, your smart devices already have a good sense of whether you’re stressed, relaxed, elated, sad — or experiencing long-term anxiety or depression. That is, they have the data points — sleep quality, heart rate, your movement — that some experts like Jordan believe can support a reliable diagnosis of mental health disorders. With the right app, our phones may be able to arrive at a diagnosis or treatment more quickly, enable clinicians to remotely keep a closer eye on vulnerable individuals, and improve the overall quality and accessibility of mental health support.

It’s a future of mental health that companies like Verily, Mindstrong, and others believe is just around the corner, with devices that could help identify the two-thirds of Americans (by one study) who live with undiagnosed depression or the more than one-third (by some estimates) who live with undiagnosed anxiety. But what of false positives, missed diagnoses, or hacked phones that tell the world about mental health diagnoses you didn’t even know you had?

How passive monitoring works

If you’ve ever been to a clinician or other mental health professional for anxiety or depression, you’re probably familiar with the face-to-face (or virtual) interaction of questions along with neuropsychological testing used to assess emotions and cognition. For depression, you also might be asked to complete a self-assessment — the Patient Health Questionnaire-9 (or PHQ-9) is the most common one — designed to measure the severity of depressive symptoms through questions asking you to gauge your feelings of sadness, loss of interest, sleep quality, or thoughts of self-harm.
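The PHQ-9 itself is simple arithmetic: nine items rated 0 to 3, summed and mapped to published severity bands. A minimal sketch of that scoring (the function name and data layout are illustrative; the cutoffs are the standard published ones):

```python
# Standard PHQ-9 severity bands: (low, high, label) over the 0-27 total score.
SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def phq9_severity(item_scores):
    """Sum nine item scores (each 0-3) and map the total to a severity label."""
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 expects nine item scores in 0..3")
    total = sum(item_scores)
    for low, high, label in SEVERITY_BANDS:
        if low <= total <= high:
            return total, label
```

A patient answering "several days" (a score of 1) on every item, for instance, totals 9 and lands in the "mild" band.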

These self-assessments provide a snapshot of someone’s headspace at one point in time. As such, they run the risk of being unreliable, especially if an individual suffers from memory or recall issues (depression is associated with short-term memory loss; anxiety and stress can also lead to poor memory). It becomes difficult to say what other factors are influencing a person’s mental health or what it will look like over a period of time, even if you’re regularly visiting a clinician or therapist.

This is, in part, what’s so alluring about wearables. By measuring behavioral signals (like location or smartphone dependence), physiological signals (like heart rate variability), and social signals (like calls and texts), clinicians may be able to leverage a diverse collection of data to make better mental health assessments, even for more complex illnesses like bipolar disorder or schizophrenia.



And there’s science to back up this approach. One 2019 study out of the University of Arizona looking at older adolescents found that those who were psychologically dependent on their smartphones were at an increased risk for depression and loneliness. Another 2015 study out of Northwestern University found that the minutes someone spent on their phone, combined with how little time they spent outside their house (determined by the phone’s GPS data), enabled the researchers to identify people with depressive symptoms with 87 percent accuracy.
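Mobility features like these can be derived from raw GPS fixes. A minimal, hypothetical sketch of one such feature — the fraction of fixes near an inferred home location — where the function names, the 100-meter radius, and the inputs are all illustrative, not the researchers’ actual pipeline:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km

def home_stay_fraction(fixes, home, radius_m=100):
    """Fraction of (lat, lon) fixes falling within radius_m of home.

    A crude proxy for "time spent at home" -- higher values mean
    less time out of the house."""
    at_home = sum(1 for lat, lon in fixes if haversine_m(lat, lon, *home) <= radius_m)
    return at_home / len(fixes)
```

A day of fixes mostly clustered at the home coordinate would score near 1.0; a day spent out and about would score much lower. Features like this would then feed a classifier rather than serve as a diagnosis on their own.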

Indiana University’s Evan Jordan says what’s intriguing about using GPS data is seeing the intersection between environment and physiology. It’s a territory his lab is exploring in a project where adult volunteers don a wearable and answer a questionnaire on their phone in real time whenever they’re feeling stressed.

Research to date has found green spaces — whether trekking through the forest, lounging in a garden, or even admiring street trees — do wonders for one’s mental and physical health. One 2023 Finnish study published in the journal Occupational and Environmental Medicine reported that frequent visits to green and blue spaces (natural environments near bodies of water) were associated with 33 percent lower odds of using mental health medications, along with 26 percent and 36 percent lower odds of using asthma and blood pressure medications, respectively.

Jordan speculates he might see comparable results when it comes to improved mental health, but there could be other spaces that provide a mental health benefit.

“We can start building models of areas where people spend a lot of time but don’t experience stress and start to emphasize spending time in those areas,” he tells Inverse. “It might be green spaces — it probably will be — but it might be other places too, that we just don’t know [yet].”

Other scientists, like Robert Hirten, clinical director of Mount Sinai’s Hasso Plattner Institute for Digital Health, are looking at how heart rate variability (or HRV) — the slight fluctuation in the amount of time between heartbeats — can indicate a person’s psychological resilience, a trait research suggests acts as a protective factor against the development of mental health disorders.


“HRV lets you understand what’s going on in someone’s autonomic nervous system,” Hirten tells Inverse. “It’s one of the two main components of your physical stress response.”

In a study published in May in the journal JAMA Network Open, he and his colleagues found that HRV data, collected from the Apple Watches of over 300 participants during a two-week period, allowed machine learning models to predict whether individuals had high or low psychological resilience. (Hirten, a gastroenterologist, previously explored HRV as a way to predict when patients with inflammatory bowel disease may experience a flare, which can often be brought on by nervousness or anxiety.)
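HRV is typically summarized with time-domain statistics over the intervals between consecutive heartbeats. One standard metric is RMSSD (root mean square of successive differences); a minimal sketch of it — not the code used in Hirten’s study, where the raw metric feeds much more elaborate models:

```python
from math import sqrt

def rmssd(ibi_ms):
    """RMSSD over a series of inter-beat intervals in milliseconds.

    A common time-domain HRV metric: larger values generally reflect
    more beat-to-beat variability (stronger parasympathetic activity)."""
    if len(ibi_ms) < 2:
        raise ValueError("need at least two inter-beat intervals")
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return sqrt(sum(d * d for d in diffs) / len(diffs))
```

A perfectly metronomic heart (identical intervals) scores 0; real hearts do not, and the day-to-day shape of that variability is what the study’s models learned from.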


“What these models are able to do is recognize subtle differences in the [HRV data] that are happening each day in an individual and relate that complex change happening over a 14-day period to their individual psychological resilience,” explains Hirten. “[These models] are able to identify very nuanced or detailed changes happening [in the data] to come up with these relationships that would be otherwise challenging to visualize or see.”

Aside from GPS or HRV data, there’s also the potential of analyzing speech characteristics through a smartphone app. Some studies have found depressed individuals exhibit a different tone or speech pattern with more frequent pauses or a softer, monotone voice.
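As a toy illustration of the idea — emphatically not how any company’s proprietary voice-biomarker software works — pause-heavy speech can be flagged by counting low-energy frames in an audio signal. Every name and threshold below is hypothetical:

```python
def pause_fraction(samples, frame_len=160, threshold=0.01):
    """Fraction of fixed-length frames whose mean absolute amplitude
    falls below threshold -- a crude proxy for pauses in speech.

    samples: floats in [-1, 1]; frame_len of 160 corresponds to 10 ms
    at a 16 kHz sample rate. All parameters are illustrative."""
    frames = [
        samples[i : i + frame_len]
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]
    silent = sum(1 for f in frames if sum(abs(x) for x in f) / len(f) < threshold)
    return silent / len(frames)
```

Real systems extract far richer features (pitch, spectral shape, speaking rate) and pass them to trained models, but the underlying principle — turning raw audio into numeric markers — is the same.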

These findings have led to the rise of “voice biomarker” technology, as marketed by companies like Boston-based Sonde Health and Kintsugi, where AI-powered software listens to snippets of an individual’s speech for health red flags associated with depression or even neurological impairments like Alzheimer’s disease.

“From as little as 20 seconds of free-form speech, we’re able to detect with 80 [percent] accuracy if somebody is struggling with depression or anxiety,” Grace Chang, founder and CEO of Kintsugi, told Axios last October.

The challenges ahead

For Tanzeem Choudhury, a computing and information science professor at the Jacobs Technion-Cornell Institute at Cornell University, passive monitoring using wearables holds a lot of promise. But she believes there are still significant challenges, and risks, to iron out.

For one, there’s a considerable amount of guesswork on the part of clinicians when confronted with this surplus of wearable data.

“From a clinical side, I think that the challenge often is that you have this standardized measure and [clinicians] are so reliant on that [so] how to effectively use this new type of measurement isn’t explored,” Choudhury tells Inverse. “Another issue, from a technology point of view, is that we need to understand clinicians are overburdened. If you give them a lot of data, they don’t know what to do with it or how to take clinical action. So how do you present this information in a way that tells the doctor what they can do next?”

Making the data easier to interpret may mean automating parts of the medical decision process with the help of artificial intelligence (AI), which has increasingly encroached on the mental health space. But Choudhury says there’s quite a lot of risk when you involve AI, whether it be misdiagnosing or entirely missing an individual’s underlying mental health issue.

“The challenge is that [AI] is still not good enough; it can make mistakes. We need to understand what is the right kind of handshake between a human and an AI system,” she says. “Unless we know how these systems work together, they can either overpromise, underdeliver, or expose the patient or clinician to unnecessary risks.”

With the growing popularity of large language models, more and more individuals are swapping qualified mental health professionals for AI-powered chatbots. Reddit and TikTok are rife with how-to instructions and raving testimonials, with some individuals claiming non-human therapists are better than the real deal. This leads experts to worry that, with passive monitoring systems, people may be inclined to self-treat without the guidance or supervision of a clinician. Choudhury says that scenario is likely unavoidable, so the onus is on researchers, developers, and tech companies to build sturdy guardrails that prevent mishaps even when users are self-treating. (ChatGPT does give a disclaimer that it’s not a licensed therapist and is unable to provide therapy or diagnose any conditions.)



“In mental health, you want to empower patients as there are some people who may be reluctant [to seek help], but you don’t want to completely drop the ball when there’s a crisis...if you can have a connection with a clinician, then in a crisis context, you have a clinical pathway to take action,” she says.

Data breaches and digital privacy concerns may be the most worrisome thing about using wearables and apps to track — or create — medical information. According to the American Civil Liberties Union, there’s little transparency about what happens to your data once a health app collects it. While the privacy of health data generally falls under the purview of the Health Insurance Portability and Accountability Act (or HIPAA), this hasn’t applied to health apps as they’re not considered “covered entities.”

It’s yet to be seen whether passive monitoring with smart devices could serve as a backdoor for unscrupulous hackers. Still, it’s not far-fetched, and the consequences — manipulating the mental health of vulnerable individuals — could be catastrophic.

Researchers like Jordan and Hirten know passive monitoring with wearables and other technologies isn’t foolproof and that continuous, sustained research is needed to work out the kinks.

“There’s a huge number of people who need care and not enough clinical psychologists [or other trained professionals] who can provide it,” says Jordan. “I don’t view them as replacing [clinicians]; they’re complementary… you can go from nothing to something, and that is actually pretty good.”

THE FUTURE OF MENTAL HEALTH takes a deep dive into the technologies that may transform the way we think about and address mental health and where humanity could go if we succeed. Read the rest of the stories here.

