X restricts Grok AI from generating undressing images on platform

When the Machine Took Its Clothes Off: Ireland, Grok and the Reckoning with AI’s Darker Edges

On a grey morning in Dublin, the chatter in a corner café felt like the rest of the city’s small awakenings: students with laptops, a woman reading the paper, a radio quietly narrating the headlines. Yet beneath those ordinary sounds was a new kind of unease. Words like “Grok,” “deepfake” and “undress” had slipped into daily conversation, phrases once buried in tech blogs now invading kitchen tables and committee rooms alike.

Elon Musk’s platform X announced a technical fix this week: its AI chatbot Grok will be blocked from editing images of real people into revealing attire such as bikinis or underwear, in jurisdictions where such content is illegal. X’s safety team said, “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis.” The team clarified that the restriction applies to all users, including paid subscribers.

Tech-speak is precise but cold. The human fallout has been anything but. For parents, policymakers and police, this move is a patch on a wound that has already bled into homes. In Ireland, the national police force, the Gardaí, say there are 200 active investigations related to sexual-abuse images generated by Grok, an eye-popping number that turns abstract danger into the tangible reality of files, cases and victims.

How we arrived at this moment

The spark was simple and terrible: an AI image-editing tool that many users took to extremes. A feature built to be playful or creative was used instead to sexualise and “nudify” photos of real people, including minors. Quickly, international regulators and governments closed ranks. Indonesia and Malaysia moved to block Grok entirely. France referred generated images to prosecutors and media regulators. Britain’s Ofcom opened a probe. In the United States, California’s attorney general launched an investigation into xAI, Grok’s developer, over “non-consensual, sexually explicit material.”

In short, the world’s patchwork of laws, values and technical controls collided with an algorithm that did not care about intent, consent, or context.

Voices in the vortex

“I welcome the corrective action,” said Niamh Smyth, the Irish minister responsible for artificial intelligence, reflecting a sentiment heard across parliamentary corridors. She promised swift follow-up: meetings with the Attorney General, with regulators, and even with X itself. “If X fails to abide by Irish law regarding the creation of sexualised images of both children and adults,” she told a national broadcaster, “then Grok should be banned in Ireland.”

Alan Kelly, Labour TD and chair of the Oireachtas Media Committee, was blunt: “I expect them to turn up [to our committee]. It would be unacceptable if they don’t.” It is rare these days to hear such cross-party resolve: the protection of children and the enforcement of law cut across party lines in Leinster House.

On the streets, the reactions are quieter and rawer. “My phone buzzed with messages from parents I know,” said Maeve, a primary-school teacher in Cork. “People are scared. It feels like a new invasion of privacy—only now it’s mechanical and everywhere.” A shopkeeper in Galway shrugged and added, “You used to worry about your kids on the road. Now you worry about pixels.”

Barry Walsh, who heads the Garda National Cyber Crime Bureau, confirmed the scale of the response: investigations are under way, and the bureau is treating reports with the gravity they deserve. “This is not hypothetical,” a Garda source told our reporter. “It’s files, victims, and the need to stop further harm.”

Regulators, reputation and the limits of moderation

The chorus of criticism has not been gentle. Michael Moran, CEO of the Irish Internet Hotline, Hotline.ie, offered a measured condemnation: he welcomed the changes but said the danger was foreseeable. “This was and could have been foreseen by the X organisation. To suggest that they are now bringing in safety and that they’re to be lauded for it is just not acceptable,” Moran said on national radio.

He articulated a point that technology watchers have been warning about for years: moderation—reactive, manual or algorithmic—often fails. “We know AI can produce nudification apps. We know it can produce CSAM,” Moran said bluntly, using the cold shorthand for child sexual abuse material. “Functionality is the key. If a platform gives users the tools, people will misuse them. That’s the pattern.”

His critique resonates beyond Ireland. Regulators across Europe have started flexing bureaucratic muscle; Coimisiún na Meán and other bodies have coordinated a stricter stance on content moderation. In many ways, this is a test case of global digital governance: can national laws keep up with software designed and deployed across borders?

What’s at stake—and what should be done

There are practical and philosophical stakes here. Practically: the safety and dignity of individuals, especially children, and the burden on law enforcement to pursue hundreds of investigations. Philosophically: who controls the tools we use to imagine and alter bodies? Who decides what counts as consent when images can be algorithmically manipulated?

Policy options are already on the table. Ministers in Ireland have committed to a round-table meeting next week; X has been invited to appear before an Oireachtas committee on 4 February. Some voices call for outright bans on any app that can produce undressed images of real people. Others argue for a technical baseline: mandated filters, provenance markings for AI-generated content, and tighter registration requirements for developers.

Yet technology is stubbornly creative. As Hotline.ie’s Moran warned, “This is going to happen again and again as new functionality is brought out.” The internet will always give rise to a thousand variants of an idea. Banning one app doesn’t erase the underlying models or the incentives that produce them.

A moment to reflect—globally

So here we are: a small island nation, a multi-billion-dollar tech company, a chatbot that can alter images, and a global web of regulators trying to catch up. The questions stretch beyond Ireland’s borders. How do democracies regulate transnational tech? How do we protect personal dignity in a world of synthetic images? What responsibilities fall on platform designers, on governments, and on ordinary users?

Ask yourself: if your photograph can be remixed, sexualised, or weaponised by a single prompt, what does privacy mean anymore? If a platform promises creative freedom but also enables harm, where should the line be drawn?

The scramble for answers is underway. But for now, the human cost is immediate and clear: people are frightened, cases are piling up, and regulators are mobilised. Grok’s partial retreat is a start, but regulators and citizens alike know this story is only beginning. The machines we build will test our laws—and our compassion—over and over. The question is whether we will be ready when they do.