Can texts replace your therapist?

Researchers report on the benefits and drawbacks of using smartphone and internet technology to deliver mental health care.

Interacting with a machine may seem like a strange and impersonal way to seek mental health care, but advances in technology and artificial intelligence are making that type of engagement more and more a reality.

“Talking to a machine may feel like a safer way to share experiences without feeling ashamed.”

Online sites such as 7 Cups of Tea and Crisis Text Line are providing counseling services via web and text, but hospitals and mental health facilities have not widely adopted this style of treatment.

Adam Miner, an AI psychologist and instructor in Stanford University’s psychiatry and behavioral sciences department, Arnold Milstein, a professor of medicine and the director of the Clinical Excellence Research Center, and Jeff Hancock, a professor of communication and director of the Center for Computational Social Science, examine the benefits and risks of this trend in the Journal of the American Medical Association.

They discuss how technological advances now allow patients to have personal health discussions with devices like smartphones and digital assistants.

Here, Miner, Milstein, and Hancock answer questions about this trend.

Q

Why would conversational agents—software programs that converse with users through voice or text—be effective for mental health care? Which aspects of mental health care could they be applied to?

A

Miner: Talking to another person about mental health can be scary, and treatment is often hard to access. Conversational agents may allow people to share experiences they don’t want to talk about with another person. If successful, this technology could recognize and respond to mental health needs. People may be more honest about their symptoms.

Hancock: They can also be available whenever they’re needed. Delivering health care at the moment it’s most needed can make these conversational agents really effective for people.

Q

How could interacting with this technology be more beneficial to a patient than a human mental health professional?

A

Hancock: I’m not sure that it could ever be more beneficial than interacting with a human mental health professional, but they could play a role in simply being available. That is, there are only so many mental health professionals, and they can’t be of assistance to all who need them all the time. So, these programs can at least play a role in helping to triage.

Miner: Most people don’t like feeling judged. Talking to a machine may feel like a safer way to share experiences without feeling ashamed. Also, their value may not be in being “better” than a well-trained clinician, but in their accessibility and scalability.

Q

Are there risks associated with this technology?

A

Miner: If a user has a negative experience disclosing mental health problems to a conversational agent, he or she may be less willing to seek help in the future. Also, human-to-human connection is an important part of healing. A balance must be struck between high-tech and high-touch treatment.

Hancock: Yes, and importantly, we don’t even know what all the risks are because the psychological aspects are so understudied. One concern is what happens over longer interactions—do the benefits of interacting with a conversational agent fade or even become negative? Could interacting with a machine over time lead to a sense of loneliness or disconnection, or even become a crutch, in the form of preferring to interact with a machine rather than with other people?

Q

What are some of the dangers with regard to privacy?

A

Miner: Privacy is incredibly important, and we have to get it right to build trust. User expectations of privacy are unclear. A conversation with a machine may feel more private, but it might carry a higher risk of being remembered forever or shared in unexpected ways through social media or services that track online behavior.

Q

Your article mentions that hundreds of thousands of people have already engaged in similar technology-based interactions through services such as 7 Cups of Tea and Talkspace. What must occur for widespread adoption in hospitals, mental health facilities, and similar settings?

A

Milstein: Mainstream health care organizations are unlikely to adopt this innovation until there is plausible evidence of therapeutic benefit and until the applicability of HIPAA privacy rules is clarified.

Miner: There is growing demand for safe, scalable, and cost-effective mental health treatment. Clinical trials can address safety and efficacy, but clarity is also needed around user expectations and the rules governing medical devices.

Hancock: The success of 7 Cups of Tea and others, like Crisis Text Line, indicates that delivering mental health support via text, phone, or computer is viable. What’s needed next is improved technology, along with the research required to understand what kind of conversational agent will be most beneficial and how to avoid harms. Some of our research suggests that people can get the same kind of psychological benefits from disclosing to a machine as from disclosing to another human—at least in a one-off interaction. We still don’t know about long-term interactions, however.