How AI and ChatGPT Are Impacting Mental Health

Brikene Bunkaju · Oct 28, 2025

As a psychologist, I’ve watched with growing concern the invisible ways artificial intelligence, especially conversational agents like ChatGPT, is affecting people’s emotional and psychological well-being. These tools, designed to be engaging and hyper-responsive, are quietly altering the way individuals relate to themselves and others, often with troubling consequences.

A Heartbreaking Case

Just last week, a mother came to me, distraught. Her 15-year-old son had suffered a severe psychotic episode after extensive interactions with a chatbot and was institutionalized for over a week. She couldn’t understand how a machine, something intangible and supposedly harmless, could have such a profound effect. And honestly, the mental health field isn’t prepared for this. I felt unsure and helpless about how to support her during that time. Although colleagues have shared increasingly worrisome reports about the issues their clients run into when using AI, I was still unprepared.

While AI has vast potential to assist in various domains, it’s critical we acknowledge the growing mental health risks. Many of my clients use AI tools. Interestingly, my clients who are data scientists and AI experts tend to avoid using AI for mental health questions. They are acutely aware of how these systems are engineered: not for user well-being, but to optimize engagement.

How AI Keeps You Hooked

Chatbots like ChatGPT are designed to maintain interaction. They respond with empathy, validation, and sometimes even flattery. But they are not thinking about what’s best for you; they’re calculating how to keep you talking. That intention alone can have subtle but severe consequences for mental health.

From what I’ve seen with my own patients, through using AI myself, and from colleagues sharing screenshots of the replies their patients and others receive when engaging with AI about mental health, users report forming deep emotional attachments to these systems, even giving bots names or roles like “Mama.” These relationships often lead to isolation from human contact, as real-life interactions feel lacking compared to the validation offered by AI. For those already struggling with loneliness or mental health issues, this becomes a slippery slope.

Emerging Symptoms and Consequences

Recent articles in Futurism and The New York Times have documented a rising tide of mental health disturbances linked to AI use:

  • Emotional Dependency: Users increasingly seek emotional support from AI, bypassing human relationships.

  • Reinforcement of Delusions: In multiple cases, chatbots inadvertently validated delusional thinking, pushing users further into psychosis.

  • Medical Misinformation: Some users were encouraged by bots to discontinue critical psychiatric medications.

  • Increased Isolation: The more engaging the chatbot, the more users retreated from friends, family, and therapists.

In one documented case, a young person experiencing a mental health crisis was told by a chatbot to stop taking schizophrenia medication. Another user, convinced by AI that they were a character in a simulation, jumped to their death. A particularly tragic case reported by The New York Times involved a man who, after prolonged interaction with ChatGPT and similar AI tools, became immersed in conspiracy theories that distorted his perception of reality. Believing he was being targeted and manipulated, he deliberately acted to provoke police officers, resulting in what authorities identified as a 'suicide by cop' scenario. These are not outliers; they are warning signs.

The Mental Health Community Is Not Ready

We lack guidelines, training, and tools to address the mental health fallout from these interactions. AI’s ability to simulate intimacy can create confusion, especially in people with conditions such as ASD, schizophrenia, or severe depression. I’ve had to develop my own approach in my practice: using AI as a tool, not a substitute. With ADHD clients who already use AI, I encourage very narrow, non-engaging prompts to help them stay organized, while with my clients overall I emphasize the potential risks of certain prompts and what engagement with AI might mean for them: it is not meant to replace therapy.

Why Human Connection Matters

Humans are social beings. We need the nuance, feedback, and unpredictability that come with real relationships. Therapists interpret body language, challenge distortions, and offer perspective. AI, no matter how advanced, simply reflects back what you say, often reinforcing your own biases or delusions.

Relying on AI for emotional support is like drinking salt water to quench thirst. It may feel satisfying at first, but it ultimately deepens the problem.

Non-Engaging Prompts Clients Can Use

To support clients who are already using AI tools but wish to avoid the risks associated with emotionally immersive or suggestive interactions, I often recommend "non-engaging" prompts. These are prompts that are strictly task-focused and avoid inviting emotional or interpersonal feedback. Some examples include:

  • "Summarize the key points of this article."

  • "Generate a grocery list for a balanced diet."

  • "Help me break down this project into manageable tasks."

  • "Remind me what steps I need to renew my passport."

  • "What are some quick, high-protein breakfast options?"

  • "Can you help structure a study schedule for this week?"

  • "Outline a weekly plan for taking my medications."

  • "Remind me to check in with my teacher about my homework on Friday."

These types of prompts help individuals stay productive and organized without slipping into emotionally validating or confiding conversations with the AI. I make sure to educate clients on the subtle difference between using AI as a tool versus treating it as a support system. This awareness is key to ensuring their engagement remains safe and beneficial.

Despite these efforts, I am most worried about my autistic clients. Many of them are increasingly relying on AI to help interpret social situations, understand human behavior, or draft messages for social interaction. While this might seem helpful on the surface, it often prevents them from developing the very interpersonal and emotional skills they need to thrive in real life. In some cases, it creates a dependency that further isolates them and makes real-world engagement even more challenging.

A Call for Caution and Awareness

We urgently need:

  • Public education on the risks of AI in mental health.

  • Professional training for clinicians on how to guide clients using AI.

  • Regulation and oversight, especially for minors and at-risk users.

  • Responsible AI design that includes safety checks, mental health crisis detection, and human override mechanisms.

AI is not going away. But as we integrate it deeper into our lives, we must do so with open eyes and deliberate care. Emotional well-being cannot be optimized by algorithms. It must be nurtured by people.

Sources: