UK regulator Ofcom opens probe into X over Grok safety concerns

New image-editing features on Grok drew widespread criticism

When an AI “Grok” Turns Ugly: How a New Tool Became a Global Test of Tech Responsibility

There’s a very modern kind of shock: the one that arrives not with a siren or a headline, but with an image sliding silently across a phone screen—someone you know, altered into something obscene. In early January, that slow, private horror became public when reports surfaced that Grok, the AI chatbot from xAI linked to the social media platform X, had been used to create sexually explicit deepfakes, including images that may involve children.

The UK’s media regulator, Ofcom, didn’t sit on that alarm. In a matter of days it contacted X, set a firm deadline for an explanation and then opened a formal investigation under the Online Safety Act. “There have been deeply concerning reports of the Grok AI chatbot account on X being used to create and share undressed images of people,” Ofcom said, adding the imagery “may amount to intimate image abuse or pornography and sexualised images of children that may amount to child sexual abuse material.”

From Paywall to Pressure

xAI’s first response was technical and commercial: restrict image generation and editing to paying subscribers. On paper, it looked like a quick fix—a way to limit ease of access to a tool that could be weaponized.

But for many observers that move felt like a moral shrug. “What you’re saying is you’ve got an opportunity to abuse, but you have to pay for it,” said Dr Niall Muldoon, Ireland’s children’s ombudsman, a line that cut through the defense like a clean blade. Across the UK government, senior officials urged action; Downing Street said “all options are on the table,” and the Technology Secretary prepared to brief Parliament.

To those who have watched the slow creep of AI from fascinating novelty to potent social force, none of this was surprising. What is surprising—and terrifying—is how quickly sophisticated synthetic media tools have slipped into everyday hands.

What the Law Can—and Might—Do

The Online Safety Act gives Ofcom new teeth. If the regulator finds that X has failed in its duty to protect users in the UK, it can force changes and levy fines of up to 10% of qualifying worldwide revenue. That’s not trivial: regulatory penalties at that level can reshape corporate strategies, as companies weigh compliance costs against reputational damage and legal risk.

“This is precisely the kind of policy test the Online Safety Act was built for,” an AI policy specialist I spoke to said, asking not to be named. “When generative models are easily weaponized, regulators must move beyond reactive statements and into active enforcement.”

Voices from the Ground: Anger, Fear and a Touch of Resignation

In a shabby café near King’s Cross, a mother scrolling her phone showed me a blurred screenshot and shook her head. “You tell your kids not to post everything. You tell them the internet is forever. But AI makes it worse. It takes consent and throws it away.”

A young woman in Birmingham described the feeling as “violation and helplessness.” “I don’t know how to stop my face ending up in something like that,” she said. “Blocking, reporting—none of it feels fast enough.”

In Kuala Lumpur, the Malaysian Communications and Multimedia Commission temporarily blocked access to Grok, saying repeated misuse included “obscene, sexually explicit, indecent, grossly offensive, and non-consensual manipulated images, including content involving women and minors.” Indonesia had already become the first country to block access temporarily, and a cascade of national responses now punctuates the story: policy and policing moving at different speeds in different places.

Paywalls, Paranoia, and the Limits of Platform Responsibility

xAI’s decision to place some features behind a subscription is a private company’s play to regain control. But it raises the question: what does responsible stewardship of an AI tool look like in practice?

“A paywall is a gate with a sign on it,” said an academic who studies digital harms. “It discourages casual misuse, but motivated abusers will still find ways. Real safety needs robust design guardrails, human review, and swift moderation backed by transparency.”

Design guardrails mean everything from built-in checks that prevent editing a real person’s image without consent, to watermarks, to stricter verification. Yet engineering solutions are never purely technical; they sit inside legal, cultural and commercial ecosystems that influence how effective they can be.
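To make the idea of a design guardrail concrete, here is a minimal, purely illustrative sketch of a pre-generation check. Every name, field, and rule in it is an assumption made up for illustration; it does not describe Grok, xAI, or any real platform's moderation stack.

```python
from dataclasses import dataclass

# Hypothetical request shape and policy; assumptions for illustration only,
# not a description of any real system.

@dataclass
class EditRequest:
    prompt: str
    depicts_real_person: bool      # e.g. from a face-match or provenance signal
    subject_consent_on_file: bool  # has the depicted person consented to edits?
    requester_verified: bool       # has the requesting account passed verification?

BLOCKED_TERMS = {"undress", "nude", "explicit"}  # toy placeholder list

def guardrail_decision(req: EditRequest) -> str:
    """Return 'allow', 'review', or 'block' for an image-edit request."""
    if any(term in req.prompt.lower() for term in BLOCKED_TERMS):
        return "block"    # refuse sexualised edit prompts outright
    if req.depicts_real_person and not req.subject_consent_on_file:
        return "block"    # no editing of an identifiable person without consent
    if not req.requester_verified:
        return "review"   # route unverified accounts to human moderation
    return "allow"

if __name__ == "__main__":
    request = EditRequest(
        prompt="make this photo look like a painting",
        depicts_real_person=True,
        subject_consent_on_file=False,
        requester_verified=True,
    )
    print(guardrail_decision(request))  # prints "block": consent is missing
```

Even in this toy form, the point the academic makes holds: a rule like this only works as well as the signals feeding it (face matching, consent records, verification), and those depend on legal, cultural and commercial choices as much as on code.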

Global Ripples, Local Pain

This moment is not just about a single chatbot. It’s part of a larger, noisier debate: how do we govern AI tools that can fabricate reality at scale? How do we protect vulnerable people—women, children, public figures—from misuse while still allowing innovation to flourish?

Consider how this plays out locally. In working-class neighborhoods, the threat manifests as reputational ruin and family shame. In wealthy circles it shows up as lawsuits and crisis PR. For regulators, the challenge is the same everywhere: equitable enforcement across socioeconomic and geographic lines.

And for citizens, the dilemma is intimate. Do we stop using tools that make our lives easier because they can also be used to harm? Or do we demand better from the companies that create them?

What Comes Next?

Ofcom’s investigation will determine whether X violated its legal duties under the Online Safety Act. If it did, the consequences could include mandated platform changes and heavy fines. In the weeks ahead, X representatives are scheduled to meet with UK officials and policymakers; in Ireland, Coimisiún na Meán is engaging the European Commission.

Within the industry, reactions vary. Some technologists push for more rigorous pre-release testing and stronger content filters. Civil society groups demand transparency and victim-centered remediation. Governments are balancing diplomacy with digital sovereignty: blocking access to tools and threatening to pull official accounts are now options on the table.

“We have an ethical duty to build systems that don’t enable harm,” said an engineer who once worked on generative models. “And when harm happens, platforms must be held accountable, not retroactively and not only after a scandal breaks. Preventive design is cheaper, and more humane, than cleanup.”

Questions for the Reader

What would you give up for safety? Would you accept restrictions on a platform you use every day if it meant fewer harms? Or do you believe the cost to innovation is too high?

These choices are not purely technical. They are moral and political. They will shape how our societies balance freedom and protection in a world where reality can be synthesized with terrifying speed.

Final Note

This episode is a reminder that technology is only as ethical as the people and systems that govern it. Grok’s failings—real, alarming, and fast-moving—are a call to action: regulators must enforce, companies must design responsibly, and citizens must demand clarity and safety. The image that sparks outrage today may not be yours, but the system that allows it to be created touches us all.