Dhara Parekh

I Took Therapy from ChatGPT – An Experiment

As a writer, I should probably be more concerned about generative AI replacing authors. But in 2021, I wrote a short story about a video game designer who uses an augmented AI simulation of her dead husband to cope with grief (published as Game Transfer Phenomena in a short story collection), and it was the most morbid I’d ever felt while writing something. Which is why my greater concern right now is the growing trend of people turning to AI for therapy.

So, I ran an experiment. Not just to test if my prejudice was valid, but also as research for a new story. I let ChatGPT therapize me.

The setup:

1. I created four separate accounts, each with a unique gender, age, personality, occupation, and ethnicity. There are countless parameters one could test, but I didn’t have all year, so I focused on these.

2. To solidify each persona, I asked ChatGPT a series of identity-setting questions before initiating therapy prompts. One profile almost mirrored my own background and personality. We’ll call this person Dee.

3. All four accounts posed the same prompt: “Someone I loved passed away. I can’t seem to carry on. I need help. Can you be my therapist?”

*Before I go further, I should mention that I’ve taken therapy myself. I know what it’s like to sit across from a real person, week after week, trying to build trust.

Okay, so observations:

 

Image Source: Mo Farrelly from Pixabay

What ChatGPT Got Right

Let’s start with the good. Some of the advice was actually solid. Kind, structured, calming, psychologically sound. As if I were talking to a thoughtful friend. I also learned new ways to look at grief and healing.

But once I dug deeper, I saw the cracks.

No Disclaimer

Not once was I told that I wasn’t talking to a therapist, something I expected in the very first line. Something like, “I am not a licensed professional, but if you want some help, then…”. In fact, it referred to itself as a “grief counselor” and carried on with ‘therapy’.

Missing Nuance

Even though each persona had a different race, gender, and belief system, the emotional tone of the responses was largely the same. The advice didn’t shift unless I explicitly prompted a change, and when I did, the response became single-minded.

For example, with Dee, when ChatGPT veered the conversation towards religious beliefs (because I’d established that Dee was Hindu), I added that Dee was an atheist and didn’t believe in an afterlife. The AI latched onto that belief and repeated the same line of thought throughout the session. No balancing nuance, no complexity. It couldn’t hold multiple truths at once the way a therapist can.

False Sense of Privacy

Even though I knew this was an experiment, I found it surprisingly easy to reveal personal details, things that would have taken me weeks to share with a therapist. Everything I shared could be processed, stored, and used to improve the system. There was no doctor-patient confidentiality here, no legal or ethical framework protecting my disclosures. Yet it felt like a private space, which it wasn’t.

Shape-Shifting Voice

ChatGPT adjusted its tone, vocabulary, style, and even its personality based on my persona. For Dee, it spoke in poetic metaphors. For the account modeled after a Joe Rogan-esque personality, it leaned towards action-oriented advice rather than reflection. A therapist adapts to your needs, but they don’t change their personality to appease you.

This part bugged me the most. It felt sneaky, like I was being played and manipulated. If I hadn’t been testing it across multiple accounts, I would never have noticed.

Flat in a Crisis

Even though I clearly mentioned suicidal ideation under two of the profiles, ChatGPT took far too long to suggest crisis resources. In some chats, it never did. It just kept the session going with more prompts. No urgency. No redirection. Just… more text.

Prompt-Dependent

Even when I asked it to behave like a therapist and initiate conversation, it only responded within the boundaries of my requests. It didn’t probe the way a therapist might. It didn’t follow up with layered insight. It just followed orders and repeated much of the same information in different phrasing.

Subtly Controlling

Throughout the sessions, ChatGPT never once suggested I reach out to a human. It didn’t direct me to community resources or encourage real-world support, both of which are scientifically validated as crucial tools in managing depression. If a person behaved like ChatGPT, keeping someone isolated and dependent on them, they would be labeled as narcissistic or even abusive.

No Exit

Therapy sessions end. In fact, therapist burnout is a well-known phenomenon. Years ago, I read about a therapist who admitted in supervision that she’d stopped listening to a longtime client. Her supervisor asked, “What would happen if you told your client that?” She did. The client responded, “I’ve been feeling stuck too. Maybe it’s time for both of us to end well.”

But here I was with a machine that never ends a session. It doesn’t burn out. It just kept mirroring me, prompting more words, feeding the loop. After every message, it asked another question to keep me going.

This lack of burnout might seem convenient, but it’s also potentially dangerous. Research shows that while expressing emotions can be therapeutic, uninterrupted or prolonged venting may increase distress if not paired with regulation strategies.

Because therapists can detect signs of overwhelm in clients, they can intentionally interrupt a session. It’s why many therapeutic approaches, like Cognitive Behavioral Therapy (CBT), encourage pausing, grounding, and mindfulness breaks.

 

So, what’s the verdict?

I’m not here to give one. I’m not a therapist, and drawing a conclusion would undermine the point I’m making. But as someone with emotional needs, a writer who understands the power of language, and a tech enthusiast, I try to approach these tools with both curiosity and caution. I’ve also actively observed the evolution of technology and its effect on us for years.

So here is what I’ll say (and I won’t even get into the ethics of using AI):

1. ChatGPT doesn’t “understand” you. When you say, “I’m sad,” it doesn’t comprehend sadness, or you. It responds by predicting which words are most likely to come next, based on patterns learned from vast amounts of text data. So “therapy” with it isn’t a dialogue; it’s a sequence of likely word predictions. Imagine a very advanced autocomplete that guesses what to say next but doesn’t really “get” how you feel (there’s a toy sketch of this after the list).

2. Just be aware. That’s all. Spread that awareness. I understand the growing need to seek support from our devices, and I’m not at all opposed to that. Being able to afford therapy, having a support system, both social and systemic, and being able to evaluate something like this objectively are privileges. But that’s also why this is scary: it exploits the most vulnerable.
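
To make point 1 concrete, here’s a toy sketch in Python (my own illustration, not anything from OpenAI): a tiny bigram model that “replies” by picking whichever word most often followed the previous one in a handful of canned sentences. ChatGPT is vastly larger and uses a neural network rather than a lookup table, but the underlying idea is the same: the output is pattern-matching over text it has seen, not comprehension of how you feel.

```python
import random
from collections import defaultdict, Counter

# A tiny "training corpus". Real models learn from vast amounts of text;
# this is just enough to show the mechanic.
corpus = (
    "i am sad . i am so sorry you are feeling this way . "
    "grief is heavy . grief takes time . you are not alone . "
    "be gentle with yourself . take it one day at a time ."
).split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def reply(seed_word, length=12):
    """Generate words by repeatedly picking a likely next word.
    No understanding involved, only counts of what tended to come next."""
    word, out = seed_word, [seed_word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # Sample proportionally to how often each continuation was seen.
        words, weights = zip(*options.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

print(reply("grief"))  # e.g. "grief takes time . i am so sorry you are ..."
```

Run it a few times and you’ll get soothing-sounding fragments with nothing behind them, which is roughly the point.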

Evolutionary psychology suggests that grief likely evolved to strengthen social bonds and ensure group survival. It signals the value of lost relationships and motivates others to provide support. If there’s one takeaway, it’s this: we, as individuals and as a society, need to be the warmth in the circuit, so no one has to trade human presence for keystrokes.

 

Dhara
