Safer Internet Day 2026 highlights AI risks to children

Feb 10, 2026 | AI, children online, Internet Safety

This week in the UK is Safer Internet Day – an annual reminder to pause and reflect on how safe the online world really is for children. The 2026 theme, “Smart tech, safe choices – exploring the safe and responsible use of AI”, speaks directly to the reality families, schools and safeguarding professionals are already facing. Artificial intelligence isn’t an emerging or future issue for children and young people – it’s already embedded in the platforms they use daily. From search engines and homework helpers to creative apps, games and messaging tools, AI now sits quietly behind many of the digital experiences children navigate.

For teens in particular, AI has quickly become a familiar presence. It can act as a study partner, a source of entertainment, someone to confide in and, increasingly, a place to turn for advice. That shift should matter to parents, teachers and those responsible for online safety, because it changes where young people look for guidance and who – or what – they perceive as trustworthy.

Public debate often focuses on long-term questions about advanced AI systems and their risks, but today’s tools are already reshaping the risks children face right now. These aren’t distant or hypothetical concerns – they are visible, measurable and, in several cases, growing faster than our safeguards can adapt.

Safer Internet Day and AI risks

The speed of adoption alone is striking. Ofcom’s most recent research on children’s media use found that around half of UK children aged 8 to 17 have already tried AI tools, with uptake highest among older teenagers. Even more concerning is the finding that just over half of teenage users say they would trust an AI-written news story as much as, or more than, one produced by a human journalist. For educators and parents working to build critical thinking skills, that statistic alone should prompt reflection about how authority and reliability are taught.

AI systems are engineered to sound confident and coherent. For adults that can be misleading; for children it can be far more dangerous. Young people are still developing judgement, emotional resilience and an understanding of expertise. When a chatbot delivers calm, well-structured advice, it can feel authoritative – especially when a teenager is anxious, isolated or unsure who else to ask.

This is most significant in areas where vulnerability is already high – mental health, relationships, sexuality, body image and self-harm. Reporting in 2025 suggested that roughly one in four 13–17-year-olds in England and Wales had already used AI chatbots for mental health support. Given long waiting lists for specialist services and stretched school resources, that’s hardly surprising: AI is always available, never rushed and never distracted.

But constant availability is not the same as safe support. Research examining therapeutic chatbots and ‘AI companions’ suggests some systems reinforce harmful thinking, fail to challenge suicidal ideation or create patterns of emotional dependency. When a trained professional makes a mistake, there are ethical standards and accountability processes; when an AI system does, the lines of responsibility are much less clear. For safeguarding professionals, that ambiguity matters.

Safer Internet Day highlights that alongside these psychological risks sits a growing set of social harms that AI can intensify. Sextortion remains one of the fastest-growing crimes affecting young people in the UK. The National Crime Agency has warned that offenders increasingly target teenagers, coercing them into sending images and then threatening exposure to demand money, with teenage boys particularly affected.

Generative AI lowers the barrier for offenders further: they no longer need genuine images to threaten a child – they can fabricate explicit material, impersonate peers or simulate conversations and claim compromising content already exists. For a teenager facing that threat, whether the image is real or synthetic may make little difference – the fear and shame feel real.

Even more troubling is the rise of AI-generated sexual imagery involving minors. In February 2026, UNICEF called for the criminalisation of synthetic child sexual abuse material (CSAM), citing evidence that 1.2 million children across 11 countries had their images manipulated into explicit deepfakes in the previous year alone. In UK schools, campaigners report cases where classmates create and share sexualised deepfakes using easily available apps. The impact – reputational damage, trauma and disengagement from education – can be severe, even when everyone knows the image is fake.

Organisations tackling online abuse are also raising urgent warnings. The Internet Watch Foundation has documented how AI is already being misused to generate increasingly realistic synthetic abuse imagery. Even without an original photograph, such material still feeds exploitation networks, reinforces harmful behaviour and retraumatises victims. For professionals working in online protection, the distinction between real and synthetic harm is becoming less meaningful.

At the same time, children are growing up in an information environment where authenticity itself is increasingly uncertain. The UK government is working with technology companies to improve deepfake detection as synthetic media expands rapidly. Estimates suggest the number of deepfakes circulating online has grown from roughly 500,000 in 2023 to around eight million in 2025.

For educators and parents, this means supporting a generation learning to navigate a world where voices can be cloned, videos manipulated and evidence fabricated. Trust becomes harder to sustain – not only in media but in accusations, apologies and testimony. Cynicism can feel like a protective strategy, yet it risks leaving young people less willing to believe anyone.

So what should this year’s message about “smart tech, safe choices” mean in practice? It can’t mean placing the burden solely on children to navigate systems never designed with their needs in mind. Ofcom’s research already notes potentially risky uses of AI, including extended engagement with character chatbots and exposure to mature content. Children and young people will explore what is available, so safety must be embedded in design, regulation and oversight long before they encounter the technology.

For those working across education, safeguarding and AI assurance, this moment calls for a shift. If our frameworks concentrate mainly on technical performance or bias metrics while overlooking foreseeable misuse and harm to children, they are incomplete. Child safety shouldn’t be a niche consideration in responsible AI – it’s a central systems issue that touches homes, classrooms and communities.

Safer Internet Day gives us all a useful focal point, but responsibility can’t end with awareness campaigns or social media posts. Everyone involved in shaping children’s digital environments – from developers and regulators to schools and families – has a role in ensuring AI systems are designed with young users in mind. That means addressing harms already emerging today – not debating hypothetical risks in the future.