Snapchat suspends 415,000 underage accounts as Australian ban takes effect

Platforms including Snapchat, Meta, TikTok and YouTube must stop underage users from holding accounts under the legislation, which came into effect on 10 December

Australia’s digital curfew: a law to protect kids — and a new kind of backyard debate

On a humid December morning, when school holidays were still a recent memory and the surf at Bondi was dotted with kids learning to stand on boards, Canberra quietly flipped a switch that has tech companies, parents and privacy advocates arguing in different registers about what it means to be safe online.

The law, effective from 10 December, requires big platforms to prevent people under 16 from holding accounts on services such as Snapchat, TikTok, YouTube and Meta’s apps, a world-first attempt to legislate the online lives of teenagers. In the weeks since, tech firms and Australia’s eSafety regulator have been enforcing the cutoff: eSafety says 4.7 million accounts have been blocked across the industry, while Snapchat reports it had disabled about 415,000 Australian accounts it believes belonged to under-16s as of the end of January.

What the law aims to do — and what it doesn’t

At its heart, the legislation is blunt and simple: prevent underage users from accessing large social platforms. Companies that fail to take what the law calls “reasonable steps” could face fines of up to AU$49.5 million. For a nation of roughly 26 million people, the move is emblematic of growing impatience with platform-led solutions to harms ranging from grooming and sexual predation to disinformation and the mental-health fallout linked to endless scrolling.

But blunt instruments cut both ways. The policy presumes that age can be reliably verified and that exclusion equals protection — assumptions that have prompted vigorous pushback from the platforms themselves and unease among advocates who worry about unintended consequences.

A messaging app’s plea: don’t isolate teens from their friends

Snapchat, which many teenagers use chiefly to message close friends and family, says it has been enforcing the rule and continues to “lock more accounts daily.” But the company also warned that age-estimation technology, whether based on self-declared data, AI-driven facial or behavioural signals, or document checks, can be off by two to three years. In practice, that could mean a 15-year-old slips through the net, or a 17-year-old is unfairly cut off.

“We understand and share the goal of keeping young people safe,” a spokesperson for Snapchat told me. “But an outright ban risks severing the most important social ties for teens, and our view is that there are smarter, more nuanced ways to keep kids safe while respecting their need to stay connected.”

In Melbourne, high-school teacher Leah Nguyen framed the quandary differently. “If you stop teenagers from using the apps they use to talk to mates about homework, mental health or even to organise a house party, you’re reducing their options to seek help,” she said. “We need to teach digital literacy and supervision, not build a wall.”

How technology struggles with the soft edges of age

Age verification isn’t a single button you press. It’s a patchwork of techniques: self-reported dates of birth, ID checks, biometric facial analysis, and machine-learning estimates based on behaviour. Each has trade-offs, and the sketch after this list shows how a platform might weigh them against one another.

  • Self-declared ages are trivial to falsify.
  • ID checks can be privacy-invasive and exclusionary for those without formal documents.
  • Biometric methods raise thorny questions about data retention, misuse, and bias.
  • AI estimates introduce skew and inaccuracy; a few years’ error margin is significant when the cutoff is 16.
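
To see how those trade-offs collide, here is a minimal, entirely hypothetical Python sketch of the triage a platform might run when signals disagree. The AgeSignals structure, the three-year error margin and the “needs_review” escalation are assumptions for illustration, built from the figures quoted in this piece, not any platform’s actual pipeline.

    from dataclasses import dataclass
    from typing import Optional

    CUTOFF = 16        # the Australian legal cutoff
    ERROR_MARGIN = 3   # assumed +/- error for AI estimates, per the claims above

    @dataclass
    class AgeSignals:
        self_declared: Optional[int] = None   # trivially falsifiable
        id_verified: Optional[int] = None     # strong, but privacy-invasive
        ai_estimate: Optional[float] = None   # biometric/behavioural model output

    def classify(signals: AgeSignals) -> str:
        """Return 'allow', 'block' or 'needs_review' for one account."""
        # A verified ID, where present, overrides everything else.
        if signals.id_verified is not None:
            return "allow" if signals.id_verified >= CUTOFF else "block"
        # An AI estimate is decisive only when it clears the cutoff by more
        # than its own error margin; the grey zone escalates to a human
        # rather than auto-blocking.
        if signals.ai_estimate is not None:
            if signals.ai_estimate >= CUTOFF + ERROR_MARGIN:
                return "allow"
            if signals.ai_estimate <= CUTOFF - ERROR_MARGIN:
                return "block"
            return "needs_review"
        # A self-declared age alone is weak evidence either way.
        if signals.self_declared is not None and signals.self_declared < CUTOFF:
            return "block"
        return "needs_review"

    print(classify(AgeSignals(ai_estimate=20.5)))                 # allow
    print(classify(AgeSignals(ai_estimate=15.0)))                 # needs_review
    print(classify(AgeSignals(id_verified=17, ai_estimate=12.0))) # allow

The point of the sketch is the size of the grey zone: with a three-year margin around a cutoff of 16, every estimate between 13 and 19 is undecidable on its own, and that band is exactly where most teenagers sit.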

“The technology is improving but it’s not magic,” said Dr. Samir Patel, a researcher in digital rights. “Estimating age from a photo or interaction data can be wrong in hundreds of thousands of cases. And when governments use legislation to force fast adoption, vendors can rush imperfect systems into production.”

App stores, the missing link?

Both Snapchat and Meta have urged Australia to push the responsibility up the chain to app stores. The idea: require Apple’s App Store and Google Play to verify the age of users before allowing downloads, creating a centralized checkpoint that’s harder to circumvent.

“If app stores were obliged to act, that would raise the bar for circumvention,” an industry analyst in Sydney suggested. “But it also concentrates extraordinary power in the hands of two companies, and creates fresh privacy questions: who verifies, how the data is stored, and what happens if the system itself is breached?”
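
What might that centralised checkpoint look like? One hypothetical design, sketched below in Python, has the store sign a bare over-16 attestation that a platform verifies at sign-up, so the app never sees a birthdate. The shared-secret signing, the token fields and both function names are illustrative assumptions; neither Apple nor Google exposes such an API today.

    import hashlib
    import hmac
    import json
    import time

    # A shared secret stands in for the public-key scheme a real app store
    # would use; everything in this sketch is illustrative only.
    STORE_KEY = b"demo-signing-key"

    def issue_attestation(over_16: bool) -> dict:
        """Hypothetical store side: sign an age bracket, not a birthdate."""
        claims = {"over_16": over_16, "iat": int(time.time())}
        payload = json.dumps(claims, sort_keys=True).encode()
        sig = hmac.new(STORE_KEY, payload, hashlib.sha256).hexdigest()
        return {"claims": claims, "sig": sig}

    def verify_attestation(token: dict, max_age_s: int = 3600) -> bool:
        """Hypothetical platform side: accept a sign-up only with a fresh,
        validly signed over-16 attestation."""
        payload = json.dumps(token["claims"], sort_keys=True).encode()
        expected = hmac.new(STORE_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, token["sig"]):
            return False   # forged or tampered token
        if time.time() - token["claims"]["iat"] > max_age_s:
            return False   # stale token; force re-verification
        return token["claims"]["over_16"]

    print(verify_attestation(issue_attestation(over_16=True)))   # True
    print(verify_attestation(issue_attestation(over_16=False)))  # False

The privacy appeal is that platforms would learn a single bit, over 16 or not. The analyst’s concern survives the design, though: whoever holds the signing keys, in practice two companies, becomes the gatekeeper for every teenager’s sign-up.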

Local lives and global questions

Walk through a suburban playground in Perth or a laneway café in Brisbane and you’ll see the human stakes. Marcus Allen, a father of two in Wollongong, balances anxiety about strangers against the constant hum of his teenage son’s social life. “I want my kids safe,” he said. “But I don’t want them to be ostracised. Teenagers need spaces to talk. Cutting them off can push conversations into darker, less visible corners.”

Across the globe, countries are wrestling with similar dilemmas. The European Union’s Digital Services Act brought new responsibilities for platforms, and the United Kingdom has explored age-verification measures and content protections. Australia’s law is the first to impose an across-the-board cutoff at the platform level — and that invites scrutiny about whether regulatory zeal could produce more harm than good.

Wider implications: privacy, inequality, and enforcement

There are deeper currents here. Tightened verification systems can entrench inequality: migrants, refugees, and poor families may lack government IDs. Biometric checks can disproportionately misidentify people of certain ethnicities. And enforcement is costly — surveillance at scale is expensive, and the penalties, while heavy, don’t automatically improve systems.

“We need to ask who pays for enforcement and whose rights are sidelined,” Dr. Patel said. “Legislation is not enough without transparency, independent audits, and avenues for appeal.”

The human terrain of a digital policy

Policy debates often lose sight of the messy, human moments: a teenager confiding in a friend about anxiety at 2 a.m.; a parent discovering troubling messages and needing evidence to show a counselor; an introverted child who only feels comfortable connecting through a specific app. The law treats accounts as units to be blocked or allowed, but behind every username is a person with a story.

“My daughter’s circle is on Snapchat,” said Ava Thompson, a mother in Sydney. “If she’s suddenly cut off, she may find another app that’s harder for me to monitor. These rules should come with investment in education, family support and better helplines, not just fines.”

Where do we go from here?

This is a global puzzle: how to protect children without hampering their social development or trampling their privacy. Australia’s experiment will yield data. Will it reduce harm? Will it erode privacy? Will tech companies build safer, more privacy-preserving ways to verify age, or will young people simply migrate to channels that are harder to monitor? The answers will matter far beyond Canberra’s precincts.

For now, the country is watching, parents are anxious, platforms are tinkering, and teenagers are — as teenagers will — working out how to live in a world where the border between online and offline is policed in new ways.

So I’ll leave you with a question: if safety demands limits, who gets to set them — and at what cost to connection, privacy, and the messy business of growing up?