Children and AI – a new Code 

Apr 28, 2025 | AI

Too often, children’s needs and rights are afterthoughts in the development of AI systems. But in March 2025, the 5Rights Foundation launched the Children & AI Design Code, a comprehensive protocol aimed at ensuring AI systems respect and protect the rights, safety and well-being of children.

The Code is not just a recommendation – it’s a strategic framework for future-proofing services, technology procurement, and governance.

Why a code for children and AI?

Children make up around 30% of the global population and are frequently early adopters of new technologies – yet most AI systems are developed without considering their unique developmental stages, rights, and vulnerabilities.

And AI is increasingly shaping how children are assessed, taught, advertised to, and policed online. From chatbots and recommendation systems to facial recognition and predictive analytics, AI can be both beneficial and harmful.

In the absence of enforceable safeguards, AI systems may:

  • expose children to inappropriate content,
  • undermine privacy and autonomy,
  • profile or discriminate unfairly,
  • or reinforce harmful social dynamics.

The 5Rights Foundation argues that while much of the global regulatory effort around AI is broad or adult-centric, what’s missing is a practical, actionable protocol to ensure children are protected – and respected – by design.

A high-level view of the Code

The Children & AI Design Code is intended for anyone designing, procuring, deploying, or regulating AI systems that may affect children. It complements existing frameworks like the EU AI Act, the UNCRC, and national digital safety legislation, but goes further in one key way: it operationalises ethical principles into concrete actions.

The Code is structured in six parts:

1. Context – why the Code is needed, who it’s for, and how it integrates with existing legislation.

2. Key considerations – how to factor in risk, rights, diversity, and the full AI system lifecycle.

3. Criteria – nine standards that AI systems affecting children must meet.

4. Risks – common harms and vulnerabilities AI systems may create or amplify.

5. The Code in action – practical guidance for each lifecycle stage, from planning to retirement.

6. Supporting tools – checklists, development models, and case studies.

Nine criteria for AI systems that impact children

At the heart of the code are nine non-negotiable standards AI systems must meet when they impact children:

1. Developmentally appropriate – reflects children’s evolving capacities at different ages.

2. Lawful – complies with all applicable laws, especially around children’s rights.

3. Safe – does not create or amplify risks to physical, mental, or emotional well-being.

4. Fair – treats all children equitably and avoids discriminatory outcomes.

5. Reliable – operates as intended, even under stress or in edge cases.

6. Redressable – enables children to report harm and access remedies.

7. Transparent – clearly communicates how decisions are made and why.

8. Accountable – includes named individuals and organisations responsible at each stage.

9. Rights-respecting – upholds the UN Convention on the Rights of the Child (UNCRC), including rights to privacy, participation, protection, and development.

These criteria are not guidelines – they are minimum standards. If an AI system cannot meet them, the Code says it should not be built, or must be significantly redesigned to avoid harming children.
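
That all-or-nothing framing lends itself to a simple compliance checklist. As a minimal sketch – the Code itself prescribes no data model or tooling, so every name below is an illustrative assumption – a review team might track the nine criteria like this:

```python
from enum import Enum

class Criterion(Enum):
    """The Code's nine minimum standards (labels paraphrased)."""
    DEVELOPMENTALLY_APPROPRIATE = "reflects children's evolving capacities"
    LAWFUL = "complies with applicable law, especially children's rights"
    SAFE = "does not create or amplify risks to well-being"
    FAIR = "treats all children equitably, no discriminatory outcomes"
    RELIABLE = "operates as intended, including edge cases"
    REDRESSABLE = "children can report harm and access remedies"
    TRANSPARENT = "communicates how and why decisions are made"
    ACCOUNTABLE = "named individuals responsible at each stage"
    RIGHTS_RESPECTING = "upholds the UNCRC"

def meets_minimum_standards(assessment: dict[Criterion, bool]) -> bool:
    # The criteria are minimum standards, not guidelines: a missing or
    # failed criterion fails the assessment as a whole.
    return all(assessment.get(c, False) for c in Criterion)

# A single unmet criterion fails the whole system.
review = {c: True for c in Criterion}
review[Criterion.TRANSPARENT] = False
assert not meets_minimum_standards(review)
```

Because `meets_minimum_standards` demands a pass on every criterion, one failure fails the system as a whole – mirroring the Code’s “do not build, or redesign” stance.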

When does an AI system impact children?

A system is deemed to impact children if:

  • it uses data involving children,
  • it shapes their experience of a service,
  • it’s likely to be used by children,
  • it affects decisions made about children (e.g. eligibility for school support or medical care), or
  • its outputs – directly or indirectly – affect children’s safety, well-being, or rights.

If any of these apply, the Code becomes fully applicable, regardless of sector or organisation size.
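
Because any single trigger brings a system into scope, the screening step is straightforward to operationalise. As a purely illustrative sketch – the `SystemProfile` fields and `code_applies` helper below are assumptions for this post, not part of the Code:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical screening record, paraphrasing the Code's triggers."""
    uses_childrens_data: bool
    shapes_childrens_experience: bool
    likely_used_by_children: bool
    informs_decisions_about_children: bool  # e.g. school support, medical care
    outputs_affect_childrens_rights: bool   # directly or indirectly

def code_applies(profile: SystemProfile) -> bool:
    # The Code applies if ANY trigger is met, regardless of sector
    # or organisation size.
    return any([
        profile.uses_childrens_data,
        profile.shapes_childrens_experience,
        profile.likely_used_by_children,
        profile.informs_decisions_about_children,
        profile.outputs_affect_childrens_rights,
    ])

# Example: a recommender that never targets children but is likely to be
# used by them still falls under the Code.
recommender = SystemProfile(False, False, True, False, False)
assert code_applies(recommender)
```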

Real-world risks the Code helps address

The Code outlines common risks AI systems may pose to children:

  • Unfairness: biased predictions in education, healthcare, or welfare services.
  • Harmful content: generated or amplified by recommendation algorithms.
  • Privacy violations: especially with surveillance tech or real-time behavioural tracking.
  • Security vulnerabilities: such as deepfakes or manipulated systems targeting children.
  • Attention capture: addictive design patterns that exploit developmental traits.
  • Social shaping: when AI influences children’s relationships, behaviour or self-image.

These are not hypothetical risks – they are real, measurable, and already occurring.

What this means

Whether you lead a health service, a school system, a tech firm or a city council, this Code is an early signal of where regulation and public expectation on children and AI are headed. If you are involved in any way with children’s use of AI, here’s what you need to consider now:

1. Integrate AI ethics into procurement and R&D

Use the Code as a baseline for assessing third-party systems or for shaping your own AI development. Build it into your vendor and EdTech due diligence and technology risk assessments.

2. Assign responsibility

Ensure there is a named, empowered executive accountable for how your organisation uses AI with respect to children. This sends a strong governance signal both internally and externally.

3. Conduct child impact assessments

Use the Code’s checklist and criteria to embed child-centred design in product planning. These assessments should be as normalised as environmental or cybersecurity reviews.

4. Upskill your teams

Ensure key departments – AI/tech, legal, safeguarding, data protection – understand children’s rights and how to apply them to AI.

5. Engage with stakeholders, including children

Designing in the best interests of children requires actually talking to them. Build this into your UX research, citizen engagement or service co-design.

6. Prepare for future regulation

The Children & AI Design Code is likely to influence statutory frameworks globally. Organisations that pre-empt these requirements will avoid disruption and earn public trust.

What the Code represents is not simply child protection – it reframes AI development as a shared responsibility. The digital world children are growing up in is not neutral. It is designed. And this design must respect the rights, capacities, and futures of its youngest users.