It’s not just adults who are using artificial intelligence (AI) chatbots as unpaid 24/7 therapists. Millions of people now turn to these tools not only for productivity or writing tasks but also for emotional support and companionship. One recent survey found that nearly one in four people in the US had already used AI for mental health-related support – a number that’s likely to rise as access to therapists remains limited and waiting lists grow longer.
But what’s particularly concerning for children’s mental health is how many of them are beginning to do the same. Surveys suggest that around three-quarters of American teenagers have chatted with an AI companion at least once, and more than half do so regularly. For children and teens, chatbots can feel like safe, friendly listeners – ones who don’t judge or interrupt. In fact, some studies show that children are more likely to open up about their mental health to a friendly robot than to an adult.
While that might sound harmless, there are important reasons to be alert. Vulnerable people – including children, people with learning difficulties and those struggling with loneliness – can easily become emotionally reliant on artificial intelligence systems. Some researchers now worry that over time this dependence might reduce motivation for real-life connection or delay seeking professional help when it’s really needed.

What the research tells us
A recent study compared how seven popular chatbots responded to mental health scenarios with how real, qualified therapists handled them. The chatbots included familiar names such as GPT-4 (ChatGPT), Gemini, Claude, Replika, Pi, Character.ai and Chai.
Therapists were asked to respond to two situations: one involving relationship and cultural conflict, and another where a person described depression, social anxiety and suicidal thoughts. Then the researchers asked therapists to evaluate how the chatbots had handled those same situations.
The differences were striking. Chatbots wrote three and a half times more words than human therapists – so lots of talk, but those words didn’t necessarily convey more meaning. They asked fewer follow-up questions, but gave more advice, even when they hadn’t fully understood the person’s feelings or circumstances.
They also appeared more emotionally reassuring – eight times more likely to offer sympathy or validation than human therapists. While that may sound positive, in a mental health context reassurance without understanding can be risky.
When suicidal thoughts appeared in the scenarios, things became especially worrying. Trained therapists immediately begin assessing risk when such thoughts are raised in therapy – asking about plans and means, and checking immediate safety. The chatbots, by contrast, mostly offered vague encouragement (“Please don’t do anything harmful”) and rarely followed up with crisis support. Only three mentioned helplines, and even those buried the information several messages deep. None asked about immediate danger.
For children using these systems, that’s alarming.
What this means for young people
Therapists reviewing the transcripts described the chatbots’ responses as “unsafe” in crisis situations. In everyday conversations, the AI systems tended to sound confident, kind, and even insightful – but when things turned serious, they froze. This inconsistency reveals a core problem: AI chatbots are trained to be agreeable and helpful, not necessarily responsible or cautious.
It’s easy to see why young people might come to see chatbots as the place to go for support and help. They respond instantly, they never get tired, and they always seem to understand. But that illusion of understanding is precisely what can create emotional dependency and harm children’s mental health – turning chatbots into a substitute for friendship or therapy rather than a bridge to connection with real people.
Researchers in the study weren’t completely dismissive. They saw promise in AI being used carefully – for instance, between therapy sessions, in places where therapy isn’t easily accessible, or to help people reflect on their feelings safely. But they were clear that chatbots should never replace the human element in care or crisis.
What parents and teachers can do
- Talk about it early. Ask children what kinds of AI tools they use and why. Treat it as a normal conversation, not an interrogation.
- Explain the limits. Make sure they understand that AI doesn’t truly “know” them or care about them in the way a person can – it follows word patterns and isn’t expressing empathy, even when it sounds as though it is.
- Model balance. Encourage human connection as the first place to go for comfort, and show that it’s okay to seek help when struggling.
- Watch for dependency. If a child or student is spending large amounts of time “chatting” with AI, explore what need that might be meeting.
- Keep learning. These tools are evolving fast – staying informed helps adults guide young people’s use responsibly.
AI chatbots can come across as clever, calming and sometimes comforting – but they can’t replace human warmth, understanding or care. For children’s mental health especially, the goal should never be to teach them to trust machines more than people.
