The Australia social media ban – early results are in


Apr 13, 2026 | children online, Internet Safety

When Australian Prime Minister Anthony Albanese signed the world’s first national social media ban into law in November 2024, the ambition was clear.

“This is the day when Australian families are taking back power from these big tech companies.”

– Anthony Albanese

The scale of the problem he was responding to was real enough – around 80% of children in Australia aged 8 to 12 were using at least one social media platform in 2024, rising to 95% of teenagers aged 13 to 15. A study found that seven in ten of these young users had encountered harmful content, including misogynistic and violent material and content promoting eating disorders and suicide, and over half had experienced cyberbullying.

The legislation – an amendment to Australia’s Online Safety Act 2021 – introduced a mandatory minimum age of 16 for accounts on ten major platforms including YouTube, Instagram, TikTok, Snapchat, Facebook, Reddit, X, Threads, Twitch and Kick. Critically, parents can’t give consent to override the ban, and tech companies face fines of up to $50 million AUD if they fail to take reasonable steps to prevent under-16s from holding accounts. Penalties fall on platforms – not on children or families.

Governments across Europe and Asia began advancing proposals to follow suit, positioning Australia as a test case for regulators everywhere. Jonathan Haidt, the social psychologist behind The Anxious Generation, praised Australia for “freeing kids under 16 from the social media trap,” though he noted there would be difficulties in the early months.

Now, just four months in, we have the first large-scale evidence of how the Australia social media ban is actually working.

The Molly Rose Foundation has published results of polling conducted by YouthInsight, Australia’s largest youth panel, surveying 1,050 young Australians aged 12 to 15. The findings represent the most comprehensive first-hand account of the ban’s impact on the young people it was designed to protect. The headline finding:

61% of 12 to 15 year-olds who previously held accounts on restricted platforms still have access to at least one active account.

More than half of previous TikTok, YouTube, and Instagram users remain able to access these platforms, and 70% of those still using restricted sites said it had been straightforward to get around the restrictions.

Nearly half described it as “very easy.”

How are they doing it? Largely by doing nothing. Between 60% and 64% of children still using the major platforms report that no action was taken by the platform to remove or deactivate their account in the first place. The platforms have largely failed to detect that their users are under 16.

Around a quarter of children simply got around age checks to keep using pre-existing accounts, with smaller numbers using workarounds such as getting a friend or family member to create a new account on their behalf or using a VPN. The compliance failure sits fully with the technology companies – not with the children.

What about safety? Over half – 51% – of children who used restricted platforms before the ban say it has made no difference to how safe they feel online.

One in seven (14%) say they now feel less safe.

Among children who had genuinely lost access to all their accounts, 12% still felt less safe – a finding the research attributes to displacement towards smaller or less well-moderated platforms, or a sense that tech companies have prioritised access restrictions over genuine harm reduction.

The report references South Korea, which in 2011 introduced a midnight gaming ban for children – a restriction-led measure comparable to the Australia social media ban. While it initially reduced time spent online, those gains steadily eroded and within four years internet use had increased again. Subsequent evaluations found sleep improved by an average of just 1.5 minutes per child, with internet addiction declining by less than one percentage point.

None of this means the problem being addressed isn’t real, or that doing nothing is a viable alternative.

There are some hopeful signals: 50% of children who used restricted platforms before the ban say they are now spending less time online, and among those who genuinely lost account access, around 45% reported positive impacts on mental health, sleep and academic performance.

But the Molly Rose Foundation is clear that it would be “deeply imprudent” for the UK or other countries to rush into a similar ban on this early evidence and recommends instead that governments focus on strengthening regulation to create genuine duties of care that hold platforms accountable for harm reduction by design.

So where does this leave parents and teachers globally who are navigating the same landscape but without the Australia social media ban?

The most useful thing adults can do right now is resist the temptation to treat restriction alone as a solution. Simply banning or removing devices without conversation tends to drive usage underground rather than eliminate it – exactly the dynamic playing out in Australia.

Instead, focus on building genuine digital literacy. Talk openly with children and young people about the design choices platforms make intentionally – infinite scroll, algorithmically curated content, notification systems engineered for compulsion. Children who understand why these features exist are better placed to notice their effects on themselves.

In schools, that means creating space to discuss online experiences honestly and without judgement, and recognising that for many young people – particularly those who are LGBTQ+, neurodivergent or socially isolated – social media platforms aren’t just entertainment but can be a meaningful source of community and support. Removing access without replacing it with alternatives could cause real harm.

For parents, regular and curious conversations about what children are seeing and doing online are consistently more effective than surveillance or sudden bans. Agree on boundaries collaboratively where possible, model responsible tech use yourself, and stay informed.

The early evidence from Australia suggests that even well-resourced, legislated attempts to protect children can fall short when the underlying responsibility of technology companies to design safer products is not enforced. That accountability gap doesn’t disappear with a ban – and it won’t disappear at home without deliberate effort either.