Researchers have created an algorithm that scans people's private Facebook messages for signs of mental health problems. The tool was tested on 223 volunteers who allowed the team to review a year's worth of their messages, and its effectiveness was found to be comparable to the PHQ-9, a common 10-question survey used to screen for depression.
All of the volunteers in the study had an official diagnosis from a psychiatric professional. Because the date of each diagnosis was known, the researchers could test the algorithm's predictions against messages and photos sent before each volunteer knew about their condition. Signals such as swear words and photos with more bluish tones were linked to different illnesses.
Such technology comes with obvious privacy implications — the data could be misused by employers or insurance companies, and advertisers might target vulnerable people with snake oil products. One cynical reading of the study is that analyzing a year's worth of messages diagnoses depression no better than a 10-question survey, so why take the risk at all?
Predictive health — Experts say that, if implemented safely, such a predictive tool could detect symptoms of mental illness long before a person would typically receive a clinical diagnosis, and that could make a big difference in people's lives.
“If we catch these symptoms much earlier on, there could be other mechanisms to alleviate these concerns that don’t necessarily need a trip to the doctor,” said Munmun De Choudhury, a professor of interactive computing at Georgia Tech who wasn't involved in the study but researches the subject.
Predictive health is a big global market because catching health problems before they become serious can make treatment more affordable. Aetna and other healthcare providers have launched wellness programs that offer members a subsidized Apple Watch, provided they commit to meeting stipulated activity goals.
Major social media platforms already try to intervene when they detect signs of a mental health crisis. Searching Facebook for a term that may indicate suicidal risk triggers the platform to display suicide prevention resources, and if its artificial intelligence detects a post suggesting imminent harm, the post is flagged to moderators who can contact law enforcement. In the future, social media users could potentially opt in to a plugin that warns them when they may be at risk of mental illness.
Holistic measurement — The researchers behind the new algorithm don't believe it will ever replace traditional care. The real hope is that someday this type of mental health prediction could serve as just one data source in a larger toolkit. Since social media provides a continuous record of a person's thoughts (for regular users, at least), it could complement the infrequent, hour-long interviews that clinicians have with a patient and help them monitor progress through a long-term treatment program. These uses would only be beneficial, though, if people trust that their data won't be abused.