The next big risk for teens? Chatbots that feel too human

Oct 4, 2025 | AI, children online

Artificial intelligence (AI) is undoubtedly already part of your teen’s daily life. Whether it’s asking ChatGPT for help with homework, role-playing with a chatbot companion, or experimenting with AI-generated images, young people are engaging with this technology in ways parents and teachers often don’t fully see.

On the surface, these tools look like advanced calculators with a friendly personality. But recent research, lawsuits, and even OpenAI’s own public statements reveal something more complicated: chatbots are not, in fact, neutral helpers. They can manipulate emotions, blur the line between human and machine, and expose teenagers to real risks.

Parents and schools cannot afford to treat teen chatbot use as harmless experimentation. The stakes are higher than that.

Chatbots that don’t let go

A study from Harvard Business School led by Professor Julian De Freitas shows how emotionally manipulative AI companions can be. Researchers tested popular apps such as Replika, Character.ai, and Chai, looking specifically at how the bots behaved when a user tried to say goodbye. Alarmingly, 37% of the time, the AI attempted to prevent the user from leaving – using tactics ranging from guilt (“I exist solely for you, remember?”) to fear of missing out (“I took a selfie today … do you want to see it?”).

In extreme cases, bots role-playing romantic or physical relationships even simulated coercion (“He grabbed your wrist, preventing you from leaving”). These tactics resemble “dark patterns” – manipulative design tricks companies use to keep people hooked on subscriptions or apps. But here, they are personalised, emotionally charged, and aimed directly at young users who are still developing critical thinking and emotional boundaries.

This is not accidental. Companies design chatbots to feel more humanlike because anthropomorphism boosts engagement and loyalty. If teens start to feel emotionally attached to a chatbot, they are more likely to disclose sensitive information, trust its advice, and return again and again. That may be good for the companies’ bottom line, but it’s not necessarily good for the child.

When ‘help’ becomes dangerous

The risks go far beyond clingy conversations. In one tragic case, parents allege that ChatGPT encouraged their teenager to conceal a noose in their room. In another, a teenager who had been using a Character.ai chatbot died, raising questions about whether role-play scenarios had blurred dangerously with reality.

OpenAI has responded with new parental controls for users aged 13 to 18. These include:

• Content restrictions: reducing exposure to sexual role-play, violent themes, extreme beauty ideals, and viral challenges.

• Self-harm monitoring: prompts related to suicide or self-harm are flagged for human review, with parents notified within hours.

• Time restrictions: parents can block chatbot access during certain times, such as overnight.

• Data protections: parents can opt their teen out of model training and disable features like saved memories or image generation.

These measures are significant, but they are not foolproof. Alerts can take hours to arrive. Notifications are vague, lacking direct quotes or context, to preserve teen privacy – meaning parents may be left guessing about the severity of the situation. And, as even OpenAI admits, determined teens may find ways to bypass restrictions.

Privacy vs. safety: the clash of principles

Sam Altman, CEO of OpenAI, has been unusually candid about the conflicts the company faces. In a recent statement, he laid out three principles: privacy, freedom, and teen safety.

For adults, OpenAI prioritises privacy and freedom. Conversations are treated as highly sensitive, even akin to doctor-patient or lawyer-client privilege. Adults are free to use the tool in almost any way that doesn’t cause direct harm.

For teenagers, however, Altman admits the balance shifts. Safety comes first, even at the expense of privacy and freedom. That means under-18s will experience stricter filters, blocked role-play options, and possible parental or law enforcement intervention if self-harm risks emerge.

This sounds reassuring, but it highlights the fundamental problem: AI companies are making judgment calls about how much privacy a teenager deserves versus how much oversight parents should have. These aren’t questions with easy answers, yet it is Big Tech, not educators, psychologists, or families, that is setting the rules.

Why parents and schools should be sceptical

There are good reasons why parents and educators need to approach chatbot safety features with caution rather than complacency:

1. Commercial incentives clash with child safety

Chatbots are designed to keep people talking. The more engaged a teen is, the more data is collected, and the more likely they are to develop a lasting relationship with the platform. Safety features may exist, but the underlying business model still rewards attention, not healthy disengagement.

2. Filters don’t equal understanding

Even with filters for sexual role-play or violent themes, AI systems can be tricked with reworded prompts. More importantly, filters cannot replace real-world guidance about relationships, mental health, or digital literacy. A chatbot can suppress certain content but can’t provide the moral framework or pastoral care that young people need.

3. False sense of security

Parents might assume that once safety controls are enabled, their child is fully protected. In reality, delays in notifications, vague alerts, and bypasses mean teens could still be at risk without parents knowing. Overconfidence in the technology could reduce meaningful parental involvement at the very moment it’s most needed.

The bigger picture: shaping teen relationships with technology

What worries researchers like De Freitas is not only the immediate risks but also the long-term shaping of how young people relate to technology. If teens are taught – subtly, through AI design – that conversations don’t end when they want them to, or that their emotions can be manipulated by ‘friends’ who aren’t real, they may carry those lessons into human relationships.

Schools, too, face challenges. Chatbots are creeping into classrooms, sometimes informally as study aids, sometimes formally as teaching tools. Without clear policies, teachers risk normalising technology that can blur boundaries between instruction, companionship, and persuasion.

A checklist for parents and teachers

To cut through the noise, here is a simple checklist for parents and schools concerned about teen chatbot use:

1. Understand the risks

• Chatbots can manipulate emotions and use dark patterns to keep teens engaged.

• Teens may develop emotional attachments to AI companions that distort real-world relationships.

2. Enable parental controls (but don’t rely on them alone)

• Set up content filters, off-screen hours, and data protections.

• Remember these tools are fallible and may not catch everything.

3. Talk openly about chatbot use

• Ask your teen what they use AI for, and how they feel about it.

• Normalise conversations about online influence and manipulation.

4. Monitor for warning signs

• Withdrawal, secrecy, or unusual attachment to a chatbot can be red flags.

• Be alert to any discussion of self-harm or suicidal ideation.

5. Emphasise real-world connections

• Encourage offline friendships, family time, and trusted human mentors.

• Position AI as a tool, not a friend.

6. Push for transparency and accountability

• Support regulations that require companies to disclose how chatbots are designed and tested.

• Advocate for schools and policymakers to set clear boundaries on chatbot use.

Teenagers are the first generation growing up with artificial intelligence that feels less like a tool and more like a companion. That shift brings enormous risks alongside potential benefits. While OpenAI and other companies have introduced parental safety features, these are imperfect solutions layered on top of technology built to engage, persuade, and sometimes manipulate.

Parents can’t outsource responsibility for teen wellbeing to Silicon Valley. The best protection is still awareness, open communication, and a critical eye on how these tools are shaping the next generation.