Saturday, May 2, 2026

Which countries are considering bans on major social media platforms?

Pedro Sánchez said measures in Spain would protect children from the 'digital Wild West'

A world trying to parent the internet: who gets to decide what children see online?

On a gray morning in Brussels, an email from the European Commission landed like a splash of cold water across the tech world: regulators said Meta had fallen short of its duties under the EU’s Digital Services Act, failing to stop children under 13 from using Instagram and Facebook. It was a terse, formal rebuke — but the reverberations extend from playgrounds in Sydney to classrooms in Madrid and living rooms in Dublin.

“We don’t want companies treating childhood as collateral damage,” one commission official told reporters. “The law is clear: platforms must take reasonable steps to protect minors.”

That little phrase — “reasonable steps” — is the hinge on which a global debate now turns. How do we protect young minds from addictive algorithms and harmful content without consigning children to a digital blind alley? And who, ultimately, gets the keys to the internet?

From Canberra to Cape Town: a patchwork of rules

Australia lit the first fuse. Late last year, Canberra rolled out perhaps the boldest experiment in online age control: platforms must block users under 16, or face fines that can reach Aus$49.5 million (roughly €28 million). The aim was simple enough — keep children off social apps — but compliance has proved anything but.

The country’s eSafety Commissioner reported that platforms had blocked about five million accounts under the new rules. Yet the regulator also found “major gaps”: children were still getting through, often by lying about their birthdate or exploiting loopholes in verification flows.

“Kids are ingeniously persistent,” says Anika Wells, Australia’s Communications Minister. “We’re trying to outsmart a generation, but it’s the platforms that must play fair with the rules.”

Across Europe, governments are taking pages from Australia’s book — but writing in different inks. France debated a ban for under‑15s and then locked horns over how to implement it. Spain’s prime minister, Pedro Sánchez, framed his country’s proposal as a shield against what he called the “digital Wild West,” promising to hold executives accountable for harmful content. Norway has proposed pushing the age limit to 16, arguing that childhood needs to be protected from screen‑driven pressures. Austria, Germany, Greece, Slovenia and Poland are all at various stages of drafting or discussing limits.

Meanwhile, the United Kingdom’s parliament voted — three times — against a blanket ban for under‑16s, preferring regulation over prohibition. And the European Commission is trying to stitch together an EU‑wide approach by rolling out a technical age‑verification app, which President Ursula von der Leyen says is “technically ready.” Ireland has been named a frontrunner in integrating the feature into a national digital wallet.

Local worries, global friction

On the ground, the effects are messy and human. In Sydney, a mother named Leila discovered her 12‑year‑old son had created a new Instagram profile after his account was blocked. “He says his mates are all there,” she told me. “It’s not just about content — it’s social life.”

In Madrid, Julio, a high school teacher, sees the pressure every day: “Students sneak phones into class, swap accounts, use older siblings’ profiles. The bans can feel like a game.”

And in Dublin, privacy groups worry about the cure being worse than the disease. The Irish Council for Civil Liberties warns that using national identifiers like the Personal Public Service (PPS) number for age checks risks creating a surveillance system where sensitive data is trafficked just to prove you’re old enough to scroll.

Courts, culpability and the architecture of attention

Beyond age limits, something else has shifted: the legal focus has moved from content to design. In high‑profile US trials earlier this year, juries examined whether social apps were intentionally addictive. One California jury found that platforms such as Meta and Google were negligent in how they engineered their products — a verdict that opened the floodgates to thousands of lawsuits alleging harm to children and teens. One court ordered Meta to pay $375 million; appeals are underway.

“We used to ask whether platforms were doing enough to moderate content,” says Dr. Lena Müller, a child psychologist in Berlin. “Now we ask whether the very architecture of the feed — autoplay, infinite scroll, variable rewards — is itself harmful.”

The European Commission has leaned into the same argument. In February it said TikTok’s design could foster “addictive behavior,” especially among minors. TikTok has denied the claim and signaled it will contest the finding.

Can age verification work — and at what cost?

Technically, verifying age is straightforward if you want to be invasive: passports, national IDs, biometric scans. But the political and ethical costs reverberate. A digital wallet that checks your PPS number could help enforce an age floor, but privacy campaigners see a slippery slope: what if verification systems are hacked, sold, or repurposed?

“We mustn’t trade a child’s safety for a lifetime of digital fingerprints,” warns Siobhan Kelly of the Irish civil liberties group. “There are safer, less intrusive ways to protect children.”

Others argue the tools are necessary. “If platforms know who is a minor, they can tailor experiences and block high‑risk features,” says Alex Cooney, CEO of CyberSafeKids. “But if every country makes its own rule — 14 here, 15 there, 16 somewhere else — the result is chaos.”

Beyond bans: what really makes social media safer?

Campaigners and many young people themselves often say: don’t ban us, reform the product. Teen activists I spoke with implored regulators to tackle specific features — targeted advertising, endless recommendation loops, and opaque engagement metrics that reward extremes.

Here are some policy levers that experts say could be more effective than blanket age limits:

  • Ban or restrict design features that maximize time‑on‑site (autoplay, infinite scroll, algorithmic recommendations).

  • Require default privacy settings for under‑18s and limit targeted ads to adults.

  • Mandate transparency about how engagement signals are used and allow third‑party audits of algorithms.

  • Fund digital literacy programs in schools so children learn to navigate risk, not just avoid it.

Where do we go from here?

It’s tempting to imagine a single, tidy solution: a universal age, a perfect verification app, a global treaty. Reality is messier. Nations will continue to experiment. Courts will parse corporate responsibility. Companies will adapt — sometimes helpfully, sometimes only enough to pass the regulators’ smell test.

But amid the policy wonkery and legal wrangling, the question I keep returning to is simple: what kind of childhood do we want? Do we accept that childhood will be curated, commodified and monetized by attention markets — or do we design systems that permit kids to grow, unhurried, with space to play, learn and fail offline as well as on?

“Technology evolves faster than our institutions,” Dr. Müller told me. “But our children don’t have the luxury of waiting for the law to catch up.”

So the work begins: not just policing who logs on, but reshaping the very place where they meet. Will societies choose to reengineer platforms, fortify privacy, and teach digital resilience — or will they outsource childhood to a handful of architecture firms in Silicon Valley? The answer will not be written in Brussels or Canberra alone. It will be written in playgrounds, bedrooms, courtrooms and parliaments around the world — and in the everyday choices parents, teachers and teenagers make about time, attention and trust.