Could Computers Use This Idea to Help Diagnose Mental Illness?
I recently spotted a headline that made me stop and read the story: an IFLScience article titled “People with Depression Use Language Differently – Here’s How To Spot It.” Those of you who know of my abiding interest in mental illness, science and language (I once studied to be a linguist) can imagine how my virtual ears perked up. The article also described how technology is being used to analyze people’s language.
But before we get to the article, a little background on computers, language and therapy.
As early as 1950, computer scientist Alan Turing devised a test (now named after him) that challenged programmers to create a machine that could produce language indistinguishable from actual human communication. This was in the days before computers could speak, so all the interactions were text only. A human judge carried on conversations with both a computer and a human partner, then had to decide which of the two was the machine. If the judge could not tell the machine’s language from a human’s, the computer was said to have passed the Turing Test. The test became a foundational challenge in artificial intelligence research. (Only one computer is said to have ever passed the Turing Test, and even that achievement is in doubt.)
In the mid-1960s, computer scientist Joseph Weizenbaum developed a program called ELIZA, which used “scripts” to reply to human communication. One of the most successful scripts was “DOCTOR,” based on the therapy of psychologist Carl Rogers, who was known for reflecting non-directive versions of what a client had said back to him or her. (“I’m mad at my mother.” “Why does your mother make you feel mad?”) Surprisingly, many users felt that the program displayed human-like feelings, and some believed that ELIZA could be helpful as an adjunct to psychotherapy — a very early version of a psychological chatbot, in other words.
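For the curious, here is a toy sketch in Python of the kind of pattern-and-reflect rule DOCTOR relied on. The two rules below are my own invention for illustration; Weizenbaum’s actual script had a far larger, ranked set of keywords and transformations.

```python
import re

# First-person words swapped for second-person ones, so that
# "my mother" comes back as "your mother".
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

# Each rule pairs a pattern with a question template. These two rules
# are illustrative only; the real DOCTOR script was much richer.
RULES = [
    (re.compile(r"i'?m (mad|sad|angry) at (.+?)[.!?]*$", re.I),
     "Why does {1} make you feel {0}?"),
    (re.compile(r"i feel (.+?)[.!?]*$", re.I),
     "Tell me more about feeling {0}."),
]

def reflect(phrase):
    """Swap first-person words for their second-person equivalents."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in phrase.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # stock reply when no rule matches

print(respond("I'm mad at my mother."))
# -> Why does your mother make you feel mad?
```

Part of ELIZA’s charm, and its limitation, is visible right there: it has no understanding at all, just string matching and a pronoun swap.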
Now we get to the IFLScience article (which originally appeared in The Conversation and drew on research published in Clinical Psychological Science). It divided the language that the computers analyzed into content and style. Not surprisingly, the content of the writing of depressed people contained more words like “lonely,” “sad” and “miserable.”
But the computers noticed that, in the realm of style, the depressed subjects used more first-person pronouns (“I,” “me,” “mine”) and fewer third-person pronouns (“she,” “he,” “him,” “they”). Researchers theorized that people with depression are more focused on themselves than on other people; pronouns, it turned out, were a more reliable marker of depression than content words.
On the mental health forums analyzed, “absolutist” words such as “absolutely,” “nothing” and “completely” proved even better markers of depression than content words or pronouns. They were prevalent in anxiety and depression forums, and more prevalent still in forums devoted to suicidal ideation.
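To make the style analysis concrete, here is a back-of-the-envelope Python sketch that counts both kinds of markers per 100 words. The word lists are small illustrative fragments I chose myself, not the researchers’ lexicons.

```python
# Tiny illustrative word lists -- the studies used much larger lexicons.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
ABSOLUTIST = {"absolutely", "nothing", "completely", "always", "never"}

def marker_rates(text):
    """Return first-person and absolutist word rates per 100 words."""
    words = [w.strip(".,!?;:\"'").lower() for w in text.split()]
    total = len(words) or 1  # avoid dividing by zero on empty input
    return {
        "first_person_per_100": 100 * sum(w in FIRST_PERSON for w in words) / total,
        "absolutist_per_100": 100 * sum(w in ABSOLUTIST for w in words) / total,
    }

sample = "I always feel like nothing I do matters. My life is completely empty."
print(marker_rates(sample))
# Both rates come out around 23 per 100 words for this (deliberately bleak) sample.
```

A real system would compare those rates against baselines from control writing rather than eyeballing raw numbers, but the counting itself really is this simple.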
What does this all mean? Researchers are using the knowledge gained from these studies to analyze natural language samples such as blog posts, sometimes with results that outperform those of therapists. And thanks to machine learning, the computers are expected only to get better at identifying not just depression and anxiety, but other conditions such as self-esteem problems.
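The article doesn’t describe the researchers’ models, but a common approach to this kind of text classification looks something like the scikit-learn sketch below. The six training texts and their labels are toy data I invented; real studies train on large labeled corpora such as forum posts.

```python
# A minimal text-classification pipeline: bag-of-words counts feeding a
# logistic regression. Toy data only -- six sentences prove nothing.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I am completely alone and nothing ever helps",
    "My whole life is absolutely miserable",
    "I never feel anything but sad and empty",
    "We had a great hike and the weather was lovely",
    "She finished the project and they celebrated together",
    "He is planning a trip with his friends next month",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = depression-flavored text, 0 = control (invented)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Nothing I do ever feels right and I am always alone"]))
# Likely [1], since the sentence shares vocabulary with the first group.
```

Notice that the vectorizer learns its features from the data; it will pick up on “I,” “nothing” and “completely” on its own, which is part of why these stylistic markers surfaced in the research in the first place.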
At the moment, computers can analyze only written samples of language produced by those suspected of having psychological problems: blogs, poems, letters. This, of course, might be perfect for use with those chatbots that rely on text-only interactions. And psychologists might be trained to listen for language cues in the conversations they have with clients.
Will the depressive language experiments prove more successful than the Turing Test in mimicking human interaction and more functional than ELIZA in providing helpful feedback to those suffering?
Personally, I hope that the experiments continue, and continue to show promise. Although computers are not likely to take the place of human therapists, they may be able to help identify people who need the most immediate help, or even assist in filling the gap for populations with no easy access to psychological services.
Read the IFLScience article here.
Photo by Mikayla Mallek on Unsplash