Monday, February 16, 2026

Starmer seeks new powers to tighten regulation of online access


Building a digital fence: Britain’s bid to shield children — and the slippery trade-offs

On a damp January morning in central London, a mother watches her son scroll through videos on his phone while they wait for the bus. She laughs at a clip, then winces as a darker clip auto-plays. “I don’t want him seeing everything,” she says, folding a scarf tighter around her neck. “But I also don’t want to become a spy.”

That uneasy tension — between protection and intrusion — now sits at the heart of a new push from Downing Street. Prime Minister Keir Starmer has signalled a drive for broader powers to regulate online access for children, arguing that the law must sprint to keep pace with rapid technological change. Behind the rhetoric are concrete proposals: an Australian-style ban on social media for children under 16, and urgent amendments to existing crime and child-protection legislation. The aim, ministers say, is simple: keep kids safer. The consequences, critics warn, are complex.

What’s on the table

The government plans to consult on measures that would give regulators and ministers swifter authority to curb digital harms. Instead of years of primary legislation each time an app or platform invents a new risk, officials want the power to act within months.

That matters in a world where an overnight algorithm tweak can create new exploitative content streams, and where generative AI tools are already capable of producing convincing — and sometimes sexualised — images of people at scale. The UK government has signalled that more AI chatbots will be swept up in a ban on creating sexualised images without the subject’s consent — a move spurred in part by controversies around AI products such as Elon Musk’s Grok.

Legally, the measures are being packaged not as an entirely new bill but as amendments to current crime and child-protection laws. That choice of route is telling: it offers speed, but critics say it could shrink parliamentary scrutiny and public debate — the very things that build trust when rights and freedoms are on the line.

From Imgur to encrypted tunnels: enforcement headaches

Governments are not guessing at the problems. When the UK tightened age-verification rules last year, a number of sites simply blocked British users rather than build systems they said would be invasive. Image-hosting site Imgur, used by millions for memes and conversation, served up blank images to users in the UK. Several adult websites also opted to block access entirely rather than implement what they called insecure and privacy-invasive verification.

Those knee-jerk blackouts expose a central paradox: protections that are too blunt encourage either overblocking or widespread circumvention. Virtual private networks (VPNs), which route your traffic through other countries, are already a common workaround. The government says it will consider restrictions on VPNs as part of its consultation, raising fresh questions about internet freedoms and the technical feasibility of policing encrypted pathways.

Voices from the frontline

“Kids are digital natives, but that doesn’t mean they’re equipped for every dark corner of the web,” says Dr. Aisha Patel, a child psychologist who has advised schools across the UK. “We need safety scaffolds, yes. But scaffolds that don’t stifle curiosity or make children feel policed.” She worries that age bans could push younger teens towards clandestine use rather than safer, supervised engagement.

Across town in a sixth-form common room, 17‑year-old Jamal rolls his eyes at the idea of a blanket ban. “You give us VPNs or older siblings’ accounts, and that’s that,” he says. “If the government wants to stop kids from seeing stuff, they need to teach us to spot the rubbish already.” His friend Anna adds, “Teaching beats banning. I’d rather get taught about consent than be told to stay away.”

For parents like the woman at the bus stop — who asked to be identified only as Clare — the calculus is different. “I work two jobs. I can’t monitor everything. If there’s a law that makes apps less predatory, I’m in. But promise me it won’t mean the state snooping into every message.”

International ripples

Britain’s deliberations come as other countries move in similar directions. Officials in Spain, Greece and Slovenia have publicly declared plans to introduce age-related social media restrictions. The spread of these proposals reflects a global unease: from Rio to Riyadh, democracies and authoritarian regimes alike grapple with how to contain digital harms without throttling speech and privacy.

Those tensions become geopolitical when regulators in one country ask platforms — many of which are US-based — to enforce rules that conflict with other nations’ laws or with the platforms’ own privacy commitments. Digital-rights groups in the United States and Europe have frequently warned of clashes between child-protection ambitions and free-speech norms.

Data and the debate

How dire is the problem? Studies are mixed, but there are clear trends: most teenagers are online daily, and mental-health services report increased referrals related to online harms. Ofcom and other watchdogs have documented widespread access to social platforms by minors, along with concerns about grooming, exploitation and exposure to harmful content. At the same time, social media is a place of community and support for many young people.

“The data tell two stories,” says Professor Liam Ortega, a technology and society scholar. “There’s increased exposure to risk at the same time that platforms are where many teens find identity and support. Policy needs to be surgical, not a sledgehammer.” He warns against one-size-fits-all bans that could do more harm than good.

Culture, commerce and the cost of safety

Local colour matters. In the UK, family dynamics, school cultures and community resources vary wildly. In seaside towns where youth clubs have closed, screens fill the gap. In wealthier boroughs, parents can afford supervised tech solutions or tutors. A national law interacts with those inequalities.

Commercial realities matter, too. Platforms are profit-driven; building robust age-verification and moderation systems costs money. Some will decline to invest and simply cut off markets. That has already happened. And as companies make market decisions, privacy-conscious citizens may find their choices reduced.

So what now?

The government says it will run consultations and seek evidence before laying down new rules. But speed is the mantra: ministers want the ability to act fast, they say, when a new threat emerges. The question for citizens and lawmakers alike is whether fast action should mean fewer checks and balances.

How should societies balance the safety of children with individual rights and practical enforceability? Is a ban the blunt instrument we need, or should the focus be on education, better design, and targeted tech solutions? And who gets to decide where the line sits?

These are not hypothetical queries. They will determine whether a child in a northern town sees a harmful clip, an adolescent in London finds a support group, or a parent in Brighton can trust that tech companies will keep their family’s privacy intact. They will also shape how democracies — not just Britain — navigate the messy, urgent business of governing the digital world.

As the consultation opens, expect the debate to be loud, granular, and profoundly human. Because at the end of the policy papers and technical briefings are kids — curious, impulsive, vulnerable, ingenious — and the adults who love them. Which path do we choose for them?