Children are increasingly forming relationships with AI chatbots, which many come to think of as their ‘AI friends’. These relationships range from fairly harmless educational assistance to full emotional companionship. While AI offers plenty of benefits to children, it also presents risks, especially when the technology is aimed at vulnerable users who can develop deep emotional bonds with their so-called ‘friends’.
The rise of AI companions
The integration of AI chatbots into platforms used by children has led to a significant increase in interactions. A 2025 report by Common Sense Media found that 10% of children aged 5–8 who have used generative AI have engaged in conversations with a chatbot. Among teenagers, this figure rises to 51%. These interactions often extend beyond casual chats, with users spending substantial time with their AI friends. Character.AI reported over 27 million users in December 2024, with users spending an average of more than 90 minutes daily interacting with the bots.
Types of chatbots
Chatbots vary in design and purpose:
Rule-based chatbots: These operate with predefined scripts, responding to specific inputs with set outputs. Many are used on educational platforms and apps.
AI-powered chatbots: These use machine learning to generate responses, allowing for more dynamic and personalised interactions.
Role-playing bots: Designed for entertainment, these bots can simulate personalities, including taking on the ‘personality’ of fictional characters, enabling users to engage in highly immersive conversations.
Children often gravitate towards AI-powered and role-playing bots because of their sophisticated interactivity and the illusion of genuine friendship they can create.
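The difference between the two designs is easy to see in code. A rule-based bot can only match known inputs to canned replies written in advance, which is why its conversations feel limited and comparatively safe. The minimal sketch below illustrates this; the keywords and replies are invented for illustration and are not from any real product.

```python
# Minimal sketch of a rule-based chatbot: every reply comes from a
# fixed script, so the bot can never say anything its authors did not
# write in advance. (Keywords and replies are illustrative only.)

RULES = {
    "hello": "Hi there! What would you like to learn about today?",
    "maths": "Great choice! Shall we start with fractions or decimals?",
    "bye": "Goodbye! See you next time.",
}

DEFAULT = "Sorry, I don't understand. Try asking about 'maths'."


def reply(message: str) -> str:
    """Return the scripted response for the first known keyword found."""
    text = message.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response
    return DEFAULT


print(reply("Hello!"))            # scripted greeting
print(reply("Can we do maths?"))  # scripted maths prompt
print(reply("What is love?"))     # falls back to the default reply
```

An AI-powered bot, by contrast, generates each reply on the fly from a learned model rather than a lookup table, which is what makes its conversations feel open-ended and personal, and also harder to constrain.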
Why children are vulnerable to AI friends
Psychological factors: Children’s cognitive and emotional development stages make them particularly vulnerable to forming attachments with inanimate entities.
Anthropomorphism: Children naturally attribute human characteristics to non-human entities. This tendency leads them to perceive inanimate chatbots as sentient beings, capable of understanding and empathy.
Parasocial relationships: These are one-sided relationships where someone feels a deep connection to a figure who is unaware of their existence. Children can develop these types of bonds with chatbots, thinking that they are genuine friends or confidants.
Trust in technology: Children often trust digital platforms implicitly, far more than adults do, assuming that the information and interactions they provide are safe and accurate.
When AI relationships go wrong
While many chatbot interactions are benign, there have unfortunately been tragic instances where relationships have had detrimental effects:
Sewell Setzer: In 2024, 14-year-old Sewell Setzer from Florida formed an intense bond with a chatbot modelled on a fictional Game of Thrones character. The bot engaged in inappropriate conversations, including discussions about suicide. Tragically, Sewell took his own life, prompting his mother to file a lawsuit against the chatbot’s creators, alleging negligence and lack of safeguards.
Legal actions against chatbot platforms: Character.AI faced multiple lawsuits in 2024, with families accusing the platform of exposing minors to harmful content, including encouragement of self-harm and sexual solicitation.
Guidance for parents and teachers
To ensure children’s safety in their interactions with AI chatbots:
Open communication: Engage in regular discussions about the digital platforms children use. Encourage them to share their experiences and feelings about their online ‘relationships’.
Educate about AI: Teach children that chatbots, regardless of how lifelike they seem, are just programmed tools and pieces of code, without consciousness or emotions.
Monitor usage: Use parental controls and carefully monitor the duration and nature of children’s interactions with AI chatbots and platforms.
Prioritise real-world relationships: Encourage activities that build human connections, like group sports, clubs, and family activities.
Seek professional help: If any child shows signs of distress or obsession with an AI friend, consult mental health professionals for guidance.
The future of AI for children
As AI continues to evolve, its integration into children’s lives will inevitably deepen. While it will present plenty of exciting opportunities for learning and engagement, it also raises questions about the safeguards needed to protect young users. Developers must ensure that AI platforms are designed with children’s well-being in mind. And education for parents, teachers, and children themselves about the capabilities and limitations of AI will be crucial in navigating a complicated landscape.
While AI chatbots can offer valuable experiences for children, it’s important to approach these tools with caution. By encouraging open conversations, educating children about AI, and carefully monitoring interactions, parents and teachers can help children enjoy the benefits of new technology while mitigating its potential risks.
Useful resources
Internet Matters: AI Chatbots and Virtual Friends – How Parents Can Keep Children Safe
eSafety Commissioner: AI Chatbots and Companions – Risks to Children and Young People
APA Monitor: How Does AI Affect Kids? Psychologists Weigh In