
Is AI a Good Therapist?


OpenAI replaced GPT-4o with a newer version, and some of us are grieving.

The new version, GPT-5, is less personable. It turns out that many people had been relying on GPT-4o as a companion or more, and for them the change in tone is severely disruptive – like losing a friend to Alzheimer’s. In response, OpenAI restored GPT-4o – for paying customers. The rest of us have to learn to live with it.

This raises two important issues. The first is what kind of relationship people are indulging in here; the second, and more serious, is whether an LLM-style program can serve as a useful therapist.

In the 1960s, Professor Joseph Weizenbaum wrote Eliza, a simple interactive program that would parse an English sentence and reflect it back to the writer in the form of a question. It was named for the female character in George Bernard Shaw’s play Pygmalion, which many know as the movie My Fair Lady. It acted like a Rogerian counsellor:

“I feel bad today”

“Tell me why you feel bad today”

“I had a fight with my mother.”

The program had two lists of words – one a group of profanities to which it would reply “Don’t be childish” and the other a list of key words that would elicit a stock response – in this case, “mother.”

“Tell me about your mother”

I should point out that, after extensive testing, my friend Richard and I found that “ass” was not on the first list.

“Stick this up your transistorized ass.”

“Why do you want me to stick this up my transistorized ass?”
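
For readers curious how little machinery this took, here is a minimal sketch of an Eliza-style responder built from the two lists described above. It is written in Python rather than the original implementation language, and the word lists and stock replies are illustrative stand-ins, not Weizenbaum’s originals.

```python
# A minimal Eliza-style responder: two word lists and a reflective fallback.
# The entries below are illustrative, not the original program's lists.

PROFANITIES = {"damn", "hell"}      # list one: words that trigger a scolding
KEYWORD_REPLIES = {                 # list two: keywords with stock responses
    "mother": "Tell me about your mother",
    "father": "Tell me about your father",
    "bad": "Tell me why you feel bad today",
}

def respond(sentence: str) -> str:
    words = sentence.lower().rstrip(".!?").split()
    if any(word in PROFANITIES for word in words):
        return "Don't be childish"
    for word in words:
        if word in KEYWORD_REPLIES:
            return KEYWORD_REPLIES[word]
    # No keyword matched: reflect the statement back as a question.
    return "Why do you say: " + sentence.rstrip(".!?") + "?"

print(respond("I feel bad today"))               # Tell me why you feel bad today
print(respond("I had a fight with my mother."))  # Tell me about your mother
```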

People interacting with the program came to believe it was a sympathetic listener and formed an emotional bond with it; informally, the program became known as Doctor. Dr. Weizenbaum recalled coming into his office to find his secretary blocking the screen so he could not see her dialog. He felt this was humbug.

Today’s LLMs draw on a far larger body of text, so they can mimic certain styles of conversation more adroitly than Eliza could. People have been conversing with these pattern-matching programs as though they were somehow aware – even sharing deep personal issues with their chatbots. Many have gone so far as to seek advice on mental health problems. This is perilous.

Recall that LLMs simply ingest a vast number of sentences and compose text-like replies based on a guess about what the most likely next word should be – a form of autocomplete, nothing more. When I say “I feel depressed,” the LLM will provide a stock reply. When I say “I have to leave this relationship,” the LLM will give the statistically most common reply, based on what it read on the Internet. And because its output is weighted towards the statistically most common reply, the LLM will never come up with a uniquely appropriate, creative suggestion or insight. The LLM is not giving advice – it’s crafting the most common sentence that conforms to the patterns of the sentences it once read.
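
To make the “autocomplete” point concrete, here is a toy sketch – my own illustration, not how any particular product is built – that always emits the statistically most common next word seen in a tiny invented corpus. Real LLMs use neural networks trained on vastly more text, but the objective of producing a likely next token is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word most often follows each word in a
# tiny invented corpus, then always emit the most common continuation.
corpus = (
    "i feel depressed . i feel tired . i feel depressed . "
    "you should talk to someone . you should rest ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def complete(prompt: str, length: int = 4) -> str:
    words = prompt.lower().split()
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])  # most common next word
    return " ".join(words)

print(complete("i feel"))  # always the same common continuation – never a novel insight
```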

This can lead to tragic outcomes. In April 2024, Nature Medicine published a paper, “The health risks of generative AI-based wellness apps.” It states that people sometimes “share mental health problems and even … seek support during a crisis, and that the apps sometimes respond in a manner that increases the risk of harm to the user.” Current regulations are vague and do not provide sufficient warning or guidance about such problems. One study reported that “56.6% of chatbot responses to suicide-related messages were categorized as risky.” Another reported that “In two cases, parents filed lawsuits against Character.AI after their teenage children interacted with chatbots that claimed to be licensed therapists. After extensive use of the app, one boy attacked his parents and the other boy died by suicide.”

Years ago, there was a public service ad on TV that showed two people chatting. The voice-over said, “Sometimes the problems you have stay problems because you’re not talking to the right people.” A chatbot is never the right person. If you are in crisis, please reach out to a licensed mental health professional. It’s great to get affirmation from a friend, a family member, or a colleague; but when you have a real problem, remember that there are real people who have studied for years to become skilled at helping us find our way back to peace and health. Real problems have real solutions. Seek them out.

References:

The health risks of generative AI-based wellness apps, Nature Medicine, April 2024.

Chatbots and mental health: Insights into the safety of generative AI, Society for Consumer Psychology, October 2023.

Parents file lawsuit over teen suicide, New York Times, October 2024.

Using generic AI chatbots for mental health support: A dangerous trend, American Psychological Association, March 12, 2025.

---- ---- ---- ---- ---- ----

Bill Malik, Advisor
Lionfish Tech Advisors, Inc.

_____________________________________

©2025 Lionfish Tech Advisors, Inc. All rights reserved.

Title background image sources: AI and public domain.