Is the Online Safety Act already out of date?

Jul 13, 2025 | Internet Safety

On 25 July 2025, the age verification provisions of the UK’s long-awaited Online Safety Act will come fully into force, placing legally binding duties on tech companies to protect children and other users online. For schools and parents, the law is a landmark: an attempt to make the internet safer in response to years of campaigning by bereaved families and child-protection charities.

But while the Act promises tougher rules, big fines, and clearer accountability for social media companies, it has also sparked fierce debate. Critics fear it is already outdated, doesn’t go far enough in protecting children, or risks undermining privacy and free speech.

How the Online Safety Act came into being

Over the last decade, Britain has witnessed a series of tragedies that exposed the risks of online life for children. Among them was Molly Russell, who died by suicide in 2017, aged 14, after viewing graphic self-harm content online. Her father, Ian Russell, has since become one of the most prominent campaigners for reform.

And he is far from alone. Other bereaved parents – including Esther Ghey, Hollie Dance and Lisa Kenevan – have spoken publicly about how they believe the internet played a part in their children’s deaths. Their stories, and their advocacy, helped push online safety up the political agenda in a way that was impossible for ministers to ignore.

Politicians faced a dilemma: how to balance the freedom of the internet with the need to protect the most vulnerable? How to make sure giant global tech companies – some richer than countries – were answerable to UK law?

The result is the Online Safety Act, passed after years of consultation, parliamentary scrutiny and delay. Its supporters say it marks the end of a “wild west” era in which companies faced little meaningful pressure to consider user safety.

What the Act does

The Act is complex, with different sections coming into force at different times, but its core goals are relatively clear. It introduces a statutory duty of care for companies whose services allow user-generated content or search.

Platforms must assess the risks posed by their services, take steps to mitigate those risks, remove illegal content quickly, and protect children from harmful or age-inappropriate material.

Crucially, from 25 July 2025, Ofcom’s new child safety rules will require platforms to verify users’ ages before giving them access to adult content such as pornography. Ofcom will not tell companies exactly how to do this, leaving room for approaches from facial scans to payment-card checks – but the expectation is clear: children must be kept away from material that could harm them.

If companies fail to comply, they face fines of up to £18 million or 10% of global turnover (whichever is higher), the blocking of their services in the UK, and potential criminal liability for senior executives in cases of repeated non-compliance.

The law also contains obligations for companies to tackle illegal content such as child sexual abuse material, terrorism, and fraud. Some of these duties are already in effect.

The road to the Online Safety Act

It has taken years of campaigning, debate and lawmaking to get to this point.

Early versions of the Online Safety Bill were published in 2021, following recommendations from parliamentary committees and charities who argued for a new approach to regulating online spaces.

Policymakers looked at other frameworks, like the EU’s Digital Services Act, but wanted to go further on child safety. There were extended debates about scope and detail. Should the law tackle “legal but harmful” content for adults? Should encrypted services be forced to scan private messages? Should it cover disinformation and hate speech?

Fierce disagreements meant the final Act was narrower in some areas than early campaigners hoped. Notably, it doesn’t force platforms to police “legal but harmful” content for adults – an omission that critics say leaves major gaps in protection.

Meanwhile, the government’s own officials acknowledge that even as the law has been debated, technology has raced ahead. Campaigners for online safety argue that the artificial intelligence boom of the past two years has introduced an entirely new set of risks that the legislation never anticipated. Features like AI chatbots and algorithmic recommendation systems are now integral to many platforms but are only partly addressed by the Act.

Still, after extensive parliamentary debate and multiple revisions, the Online Safety Bill received Royal Assent in October 2023. Since then, Ofcom has been busy drafting and consulting on detailed codes of practice that companies will be expected to follow.

Hopes for the Act

Supporters of the Act argue that it is nothing short of a historic shift in how online spaces are governed.

After two decades in which the largest social media companies often prioritised growth, engagement and profit over safety, they will now face real, enforceable obligations to consider how their design choices and policies affect children.

For parents and schools, the promise is that companies will finally have to think about safety by design: building in age checks, stricter content moderation, and more transparent reporting systems.

Politicians point to the heavy fines as evidence that this is not just symbolic. Companies will have real incentives to get their houses in order or risk being shut out of the UK market altogether.

Child-safety advocates including the NSPCC, the Samaritans and the Molly Rose Foundation have welcomed the Act’s core premise: that the burden of protection should no longer fall entirely on parents and teachers but must be shared by the tech firms profiting from children’s attention.

As one Whitehall source put it, “We’ve had 20 years with no attention being paid to safety. You can’t say that now.”

Criticisms and doubts

But optimism is not universal. Critics of the Online Safety Act span a wide spectrum, from child-protection campaigners to privacy activists and tech companies themselves.

Some safety campaigners argue the rules simply don’t go far enough. They point out that while age verification will help keep younger users away from pornography, the Act doesn’t cover what children share with each other on messaging apps. It also doesn’t regulate in-app purchases or features in games like loot boxes that can exploit children and their families financially.

Dangerous online challenges, risky stunts and algorithm-driven feeds that normalise eating disorders or self-harm can still circulate widely with limited controls.

Moreover, many believe Ofcom has been too cautious in drafting its final codes of practice. The Children’s Commissioner for England, Rachel de Souza, has accused the regulator of prioritising the interests of technology companies over children’s safety.

The law has also drawn fire for failing to address misinformation or hate speech in a comprehensive way. A parliamentary committee investigating the Southport riots recently warned that the UK’s approach still leaves users exposed to large volumes of misleading and harmful content that can damage mental health, fuel violence and undermine democracy.

Civil liberties groups have their own worries. By empowering a regulator to oversee speech online, some fear the UK is opening the door to censorship or government overreach. The question of how to balance safety and freedom of expression remains contentious, with no easy answers.

Privacy advocates are particularly alarmed at proposals (still under consideration) to scan end-to-end encrypted messages for child sexual abuse material. Companies like Apple, WhatsApp and Signal argue this would fundamentally weaken privacy and security for all users, setting dangerous precedents for surveillance.

Ofcom’s crucial role

From 25 July, Ofcom will formally begin enforcing the child safety duties in the Online Safety Act. This is a major expansion of its powers, moving the UK from a largely self-regulated model to one where an independent statutory body can impose fines, investigate companies and, in extreme cases, force platforms to be blocked in the UK.

Ofcom has spent months consulting on codes of practice that set out what companies need to do to comply. While the regulator won’t mandate specific technologies for age verification, it will expect firms to show they are genuinely stopping under-18s from accessing adult content.

There is also a wider set of safety requirements: removing illegal material quickly, conducting risk assessments, and offering clear ways for users to report harmful content. Platforms will need to appoint senior managers who are accountable for child safety.

Already, Ofcom has opened investigations into companies suspected of breaching the Act. These early enforcement moves are intended to show that the new regime has real teeth, not just symbolic penalties.

But critics warn that even with strong legal powers, Ofcom will face huge challenges in monitoring and regulating the global tech giants effectively. It will need to combine enforcement with technical expertise, diplomatic pressure, and an understanding of fast-moving trends in the online world.

What does all this mean in practice?

  • From 25 July, companies must prevent children from seeing harmful adult content by checking users’ ages.
  • Platforms will be required to remove illegal content quickly and minimise exposure to self-harm, suicide and eating disorder material in children’s feeds.
  • Age verification measures could include facial scans, payment details or other technology, sparking debates about privacy and access.
  • Ofcom will be able to fine companies, force changes and even prosecute senior managers if firms don’t comply.
  • Some risks, like content on private messaging apps or AI-driven recommendations, are only partly addressed and will likely need future regulation.

What comes next

Even as the Online Safety Act comes into force, ministers are already thinking about the next round of reforms.

The government has indicated it wants to move beyond just restricting what children can see, and look at how they use the internet. Cabinet minister Peter Kyle has spoken about developing “healthy habits” online, recognising that some apps are deliberately designed to be addictive or foster compulsive behaviour.

Proposals under discussion include “app caps” and screen time limits, extra rules on live streaming, and clearer age distinctions between what 13- and 16-year-olds can access.

But for now, these ideas remain in development, and new legislation will be needed to make them reality. That means the debate about online safety is far from over.

The Online Safety Act represents the UK’s most ambitious effort yet to hold tech companies accountable for protecting children and other users. It is the product of real tragedy, tireless campaigning and tough policy trade-offs.

It promises real change – forcing platforms to take child safety seriously in a way they have not been compelled to before. But its limitations are clear, and even its supporters acknowledge it will need to evolve as technology develops.

For parents, carers, teachers and safeguarding professionals, this new law is a critical tool – but not the whole answer. Education, conversation, and critical thinking will remain essential to help young people navigate a complex, enticing, and sometimes dangerous online world.