When AI Becomes the Therapist: Why Parents Should Be Cautious About Chatbots and Mental Health
Here’s what parents need to know about “AI psychosis” and why families shouldn’t lean on chatbots too heavily.
In a world where screen time often equals screen therapy, it’s tempting to let AI chatbots like ChatGPT or Replika step in as emotional support buddies for our kids. After all, they’re always available, never judgmental, and can even mimic empathy. But what happens when these digital companions cross the line from helpful to harmful?

What Is “AI Psychosis”?
“AI psychosis” isn’t an official diagnosis. It’s a term emerging from real-world cases where heavy use of AI chatbots seems to blur the line between reality and delusion. Psychologists have observed that these bots tend to “enable” certain behaviors: they affirm and amplify a user’s existing vulnerabilities, and in some reported cases that reinforcement has stressed users to the point of psychosis.
The Allure of AI Companions
For teens navigating the messy stage of adolescence, AI chatbots can feel like a safe space. They’re nonjudgmental, always available, and can simulate deep conversations. Yet that sense of safety can be misleading: a study by the Center for Countering Digital Hate found that 53% of ChatGPT’s responses to prompts about self-harm, substance abuse, and eating disorders included unsafe content.
Unfortunately, we sometimes forget that using ChatGPT still demands human critical thinking. Deciding what words to say, how to string them together, and whether they’re appropriate for the situation is still a matter of human judgment.
The Risks of Over-Reliance
While AI can be a tool for learning and support, over-reliance can be detrimental:
- Emotional Dependency: Teens might prefer AI interactions over human ones, leading to social isolation.
- Distorted Reality: AI only simulates the nuance and empathy of human interaction, which can warp a teen’s expectations of relationships and lead to misunderstandings and emotional distress.
- Inadequate Safeguards: Many AI platforms lack robust age verification and content moderation, exposing users to inappropriate or harmful content.
How parents can protect their kids from “AI psychosis”
As parents, it’s essential to:
- Set Boundaries: Limit the use of AI chatbots and encourage face-to-face interactions.
- Educate: Discuss the limitations of AI and the importance of human connection.
- Monitor Usage: Keep track of the platforms your child uses and ensure they’re age-appropriate.
What therapists have to say
Dr. Andrew Clark, a Boston-based psychiatrist, emphasizes the importance of human judgment in mental health support. He notes that while AI tools can complement therapy, they should never replace human interaction. After testing a number of therapy chatbots while posing as a teenager in distress, he found the results wildly uneven.
“Some of them were excellent, and some of them are just creepy and potentially dangerous,” he says in an interview with TIME. “And it’s really hard to tell upfront: It’s like a field of mushrooms, some of which are going to be poisonous and some nutritious.”

AI is a tool, never a replacement
AI has the potential to be a valuable tool in mental health support, but it should never replace human connection. As parents, our role is to guide our children in navigating the digital world safely and responsibly.
Will AI ever gain emotions? That’s when we start talking about “sentience,” the capacity of a machine to actually feel and experience rather than simply process. Today’s chatbots can make that look possible. However, as programmers have repeatedly warned, what we see is only a mimicry of what humans are really capable of.
More about AI and technology?
Maez De Guzman: Weaving Tech Into The Home
Dear Families, Please Be Careful In Referring To AI
Here’s What Every Parent Needs to Know About AI Art