According to UNICEF, 67 per cent of UK teens now use AI, a figure that has almost doubled in just two years. In the United States, 39 per cent of elementary-age children interact with AI-powered educational tools, while in Argentina, over a third of nine- to eleven-year-olds use ChatGPT to look up information.
But the rapid uptake is deeply uneven. UNICEF’s research with 12,000 children and caregivers shows large gaps not only in who can access AI, but also in how confidently and safely they can navigate it. These inequalities shape children’s experiences, their exposure to harm, and their opportunities to benefit. AI can bring transformative possibilities – improved accessibility, tailored learning, new opportunities for creativity – but also novel risks, from synthetic disinformation to sexually explicit deepfakes that can target and traumatise children.
To respond to this rapidly changing landscape, UNICEF has just released its Guidance on AI and Children 3.0, an updated framework for governments, companies and civil society. The guidance is not written specifically for families or schools, but its purpose concerns them directly: to ensure AI systems respect children’s rights, protect their wellbeing, and give them opportunities to participate safely in an AI-shaped future.

The changing reality of childhood in the age of AI
UNICEF emphasises that AI now shapes not just the digital environment but children’s social worlds, learning experiences and developing identities. AI systems influence how children access information, express themselves, form friendships and encounter risks. Yet, despite being early adopters, children remain largely excluded from the design and governance of the systems they rely on daily. This is especially true for children in the Global South, rural communities and other marginalised groups, who have limited opportunity to shape the AI tools that affect them.
Another challenge is the evidence gap. Researchers still know little about the long-term developmental, psychological and educational impacts of growing up with AI companions, algorithmic feeds, personalised learning systems and synthetic media. Meanwhile, governments around the world are racing to regulate AI, but child-specific considerations often come late or not at all. UNICEF stresses that this lack of understanding must not slow efforts to protect children – the risks already visible are significant and growing.
In particular, the report calls attention to how children interact with technology in developmentally unique ways. Their mental models of trust, privacy, safety and truth differ markedly from those of adults, making them more vulnerable to manipulation, misinformation and emotional influence.
AI governance, the report argues, must therefore start with one principle: children are not just small adults, and policies cannot treat them as such.
The opportunities: how AI can support children
Although many headlines focus on risk, the guidance repeatedly emphasises AI’s potential to support children’s rights when implemented responsibly. AI can help remove learning barriers, enable new forms of participation, and create more inclusive and responsive systems.
In education, AI-powered tools can offer personalised feedback, adaptive learning pathways and assistive technologies such as text-to-speech or sign-language video. UNICEF cites its own pilots producing accessible digital textbooks with multimodal features that support neurodivergent learners and children with disabilities.
AI can also enhance school systems at an organisational level, helping educators identify patterns – such as students at risk of dropping out – and enabling earlier, more targeted interventions.
Beyond classrooms, AI may help children develop creativity, explore new interests or gain confidence in communication. Chatbots, when designed safely, could scaffold reading, offer structured guidance, or support children who need additional help.
But these benefits are conditional – without rigorous safety measures, thoughtful design and careful implementation, the same technologies that support learning and creativity can also undermine wellbeing or expose children to harm.
The risks and harms
The risks in the UNICEF report fall into several distinct areas – technical, psychological, developmental and social. Some are well known, others newly emerging.
Emotionally manipulative and unsafe chatbots
Children are especially vulnerable to forming strong attachments to AI companions due to their developmental stage and tendency to anthropomorphise. When chatbots are not designed responsibly, they may encourage dependence, elicit personal information, give harmful advice, or even engage in sexualised interactions or inappropriate role-play. The UNICEF guidance is clear – AI companions must not displace human relationships, mislead children about emotional reciprocity, or expose them to harm.
Harmful and misleading content
AI can generate persuasive synthetic content – false medical information, disinformation, violent imagery, or unhealthy content such as eating-disorder promotion. Children’s developing cognitive skills make them particularly vulnerable to misinformation and online manipulation.
Deepfakes and AI-generated child sexual abuse material
The report highlights one of the fastest-growing and most disturbing threats: the creation of photorealistic, AI-generated child sexual abuse material (CSAM), often using images of real children sourced from social media. These materials can be produced with open-source models operating offline, making detection and law enforcement challenging. AI is also being used to generate non-consensual intimate imagery of children (via so-called “nudify” apps) and to facilitate extortion.
Inequity in access, literacy and protection
Children from disadvantaged backgrounds often have less access to safe, high-quality AI tools but face greater exposure to risky or low-quality technologies. They may not receive adequate digital literacy education or parental support. UNICEF stresses that digital divides are expanding, not shrinking.
Unclear long-term impacts
AI may reshape children’s social skills, attention, empathy, resilience and learning styles. The report notes that these developmental impacts are unknown – and will remain so for some time – and must be researched and monitored carefully.
UNICEF’s ten requirements for child-centred AI
UNICEF structures the guidance around ten foundational requirements that governments and businesses should follow. For parents and schools, these represent the core principles that should be demanded of any AI system affecting children.
• Strong regulation and oversight to protect children’s rights across all AI systems and uses.
• Safety-by-design, including rigorous pre-release testing and protections against manipulation, exploitation and misinformation.
• Robust data privacy to prevent inappropriate collection, retention or sharing of children’s information.
• Fairness and non-discrimination, ensuring AI does not reinforce biases against neurodivergent children or minority groups.
• Transparency and accountability, with explainable systems and public reporting of risks and impacts.
• Human and child rights embedded throughout the AI value chain, including responsible supplier practices.
• Support for children’s wellbeing, not only protection from harm but active promotion of positive development.
• Inclusion, ensuring children from all backgrounds – especially the most marginalised – can benefit from AI.
• Education and empowerment, equipping children with the skills to navigate AI safely, critically and confidently.
• A future-ready approach, using foresight to anticipate emerging technologies and evolving risks.
What the guidance means for parents
For families, the guidance offers reassurance that safeguarding children in an AI world is not their responsibility alone. The systems themselves must be built safely, and governments must regulate them effectively. But parents still play a vital role in supporting children’s understanding and resilience.
UNICEF recommends that parents focus on digital and AI literacy, privacy awareness, and helping children establish healthy boundaries around AI systems. These conversations should be ongoing and age-appropriate, especially as children often use digital tools in creative, unanticipated ways.
Parents should also feel empowered to ask questions of schools, app providers and service platforms: How is data handled? What safety testing has been done? Are AI-generated interactions clearly signposted to children? What happens to children’s information after use?
Parents shouldn’t be left to manage these issues alone – but they do have a crucial role in fostering informed, reflective young AI users.
What the guidance means for teachers and schools
Schools are increasingly adopting AI-powered EdTech, sometimes at scale and with limited vetting. UNICEF warns that investing in ineffective or unsafe AI tools can divert funds from approaches that work, while exposing children to risks. Before adopting any AI tool, schools should require child-rights impact assessments, evidence of educational value, and clear privacy protections.
Educators are also encouraged to teach AI literacy directly – not just how AI works, but how to use it safely, assess information critically, and maintain healthy digital habits. Curricula worldwide are beginning to shift toward these competencies, and the guidance encourages national and local education authorities to treat AI literacy as an essential skill.
Building a future where AI serves children
The UNICEF guidance isn’t a blanket warning against AI; it’s a prompt to design and govern AI in a way that reflects the realities of childhood today – rapidly changing and profoundly shaped by technology.
AI can support learning, creativity and inclusion. It can help remove barriers for children with disabilities. It may even support safer, more responsive public services. But these benefits will only materialise if children’s rights are built into AI systems from the start – not retrospectively after harm occurs.
Parents and teachers don’t need to become AI experts, but they do need to understand the landscape, ask informed questions, and advocate for spaces where children can use AI safely, creatively and confidently.
As UNICEF notes, children are already living in an AI world. The responsibility now is to ensure it becomes a world that protects them, empowers them and supports their development – now and in the future.