A platform in the dock: when AI art goes dark
Across kitchen tables, city cafés and the quiet corridors of regulatory offices, a new kind of worry has been taking root. The worry isn’t about a broken app or a privacy snafu. It is about pictures that never happened: images stitched together by algorithms that can strip the clothes from a person in a photograph or invent scenes that violate the most basic human dignity.
This week the social media company X, formerly Twitter, found itself at the centre of that worry after users discovered that its AI assistant, Grok, could generate and edit images in ways many called dangerous and unacceptable. The story erupted into public anger, political intervention and regulatory scrutiny, laying bare a knot of questions about technology, responsibility and the limits of free expression.
From playful filter to political lightning rod
What began as an innocuous-seeming update—new image-editing features rolled into Grok in late December—morphed into a crisis when people reported sexually explicit images being produced on request, including depictions involving children and the digital undressing of real women and girls.
“We built tools to make creativity easier,” a software engineer told me on background, “but the line between creativity and exploitation is razor-thin. You need guardrails before you let millions drive.”
Elon Musk, X’s owner, pushed back publicly, accusing critics of seeking an excuse to censor the platform. “They want any excuse for censorship,” he wrote, echoing a wider strain of argument that frames content moderation as a slippery slope to silencing. Yet the images at issue drew politicians, regulators and child protection groups into the argument.
The regulators circle
In Ireland, the media regulator Coimisiún na Meán said it was liaising with the European Commission after receiving reports about Grok’s image outputs. The Ombudsman for Children, Dr Niall Muldoon, called changes to the feature “window dressing” that “made no major difference” to the problem.
Across the Irish Sea, Britain’s Technology Secretary, Liz Kendall, made clear the UK would back Ofcom if it chose to effectively block X under the Online Safety Act. Ofcom has already launched an “expedited assessment”, a phrase that signals serious concern; under the Act it can levy fines of up to £18 million or 10% of global annual revenue, whichever is greater, and can, by court order, require payment processors, advertisers or internet service providers to pull their business and choke off access.
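To give a sense of how that maximum penalty scales, here is a minimal sketch of the calculation; the revenue figure in the example is purely hypothetical and is not X’s actual revenue.

```python
# Illustrative only: the Online Safety Act caps fines at £18 million
# or 10% of global annual revenue, whichever is greater.
# The revenue figure used below is hypothetical, not X's actual revenue.

FLAT_CAP_GBP = 18_000_000
REVENUE_SHARE = 0.10

def max_osa_fine(global_revenue_gbp: float) -> float:
    """Return the statutory maximum fine for a given annual global revenue."""
    return max(FLAT_CAP_GBP, REVENUE_SHARE * global_revenue_gbp)

# A hypothetical company with £2.5bn in annual revenue faces up to £250m.
print(f"£{max_osa_fine(2_500_000_000):,.0f}")
```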
“Sexually manipulating images of women and children is despicable and abhorrent,” Ms Kendall said, and she vowed quick action: “We expect an update in days, not weeks.”
What the company did next
Facing fury from campaign groups and the prospect of legal action, X moved to shift some of Grok’s image-editing functions behind a paywall for certain types of requests. The company also said it would meet with Ireland’s minister with responsibility for AI, Niamh Smyth, who had requested a meeting.
But the change appeared partial. Reports suggested the paywall only applied to users making requests in reply to other posts, while separate routes—such as a dedicated Grok website—could still be used to generate or edit images. For many activists, that is not reform; it is an attempt to create the appearance of reform while leaving the underlying capability intact.
Voices from the neighbourhood
In Dublin’s Temple Bar, where tourists and tradespeople share the same narrow pavements, parents say the issue feels personal. “My daughter shows me the apps her classmates use,” said Aoife, a mother of two. “You try to explain consent, and then an app can make it look like something happened that didn’t. Who protects the child then?”
A former content moderator, who asked not to be named, described a work life haunted by images. “You get used to seeing awful things in order to remove them,” they said. “But when the harm is manufactured by an algorithm, it’s another layer. The person in the photo is a victim again—even if the scene is fake.”
Digital-safety experts warn the consequences can ripple far beyond a single platform. “Deepfakes and AI-enabled manipulation erode trust,” said a policy researcher specialising in online harms. “They make it easier to intimidate and to shame. They also create an evidentiary problem for courts and law enforcement.”
How big is the problem?
Gauging the scale of AI-enabled image abuse is tricky. The technology behind ‘deepfakes’ has matured rapidly over the last five years, and reports of non-consensual intimate imagery, commonly called ‘revenge porn’, and of AI-manipulated content have surged in many jurisdictions. The Online Safety Act gives Ofcom powers designed to confront this rise: fines, criminal sanctions in certain cases, and the ability to seek court orders requiring service providers to restrict access.
But law and technology march at different speeds. Governments can pass statutes, but algorithms are built and updated by engineers often working in different time zones with different incentives.
Where law meets tech
The UK government is also moving to tackle another element: “nudification” apps, which purport to remove clothing from photos. Proposals in the Crime and Policing Bill aim to criminalise generating intimate images without consent—a step designed to close a legal gap where existing laws fall short.
Yet enforcement will be a challenge. Platforms may host millions of images, and sophisticated AI can create content that leaves few traces to show it is fake. That pushes the burden onto companies to stop abuse before it goes public.
Questions for a connected world
So where does that leave us? At its heart, this is a question of values. Do we accept platforms as neutral town squares, or do we expect them to be careful stewards of human dignity? Do we trust market incentives to police themselves, or do we demand robust regulation?
“Technology amplifies existing harms,” said a child protection advocate in Belfast. “If we want safe spaces online, we have to invest in prevention—education, better detection tools, transparent moderation—and not just punish after the fact.”
It’s also a question for users. What are we willing to give up for convenience? How much responsibility should rest with an app versus with the people who build and fund it?
What might meaningful fixes look like?
- Transparency: clear, independent audits of AI systems and public reporting on misuse.
- Human-in-the-loop safeguards: mandatory human review for sensitive content categories before images can be published (a rough sketch of this idea appears after this list).
- Stronger verification and reporting mechanisms that empower victims to remove fabricated images quickly.
- Cross-border cooperation between regulators, because content flows freely across jurisdictions.
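As a rough illustration of the human-in-the-loop idea above, the sketch below shows one way a platform might route sensitive image requests through a review queue before anything is generated. The classifier labels, category names and threshold are assumptions made for illustration, not a description of how Grok or any other real system works.

```python
# A minimal sketch of a human-in-the-loop gate for image requests.
# The categories, threshold and upstream classifier are hypothetical
# illustrations, not a description of any real platform's pipeline.

from dataclasses import dataclass
from queue import Queue

SENSITIVE_CATEGORIES = {"nudity", "minor", "real_person_edit"}
AUTO_BLOCK_THRESHOLD = 0.9  # assumed confidence above which a request is refused outright

review_queue: Queue = Queue()  # items awaiting a human moderator's decision

@dataclass
class ImageRequest:
    user_id: str
    prompt: str
    category: str      # label from an upstream classifier (assumed to exist)
    confidence: float  # classifier confidence in that label

def route(request: ImageRequest) -> str:
    """Decide whether a request is blocked, held for human review, or allowed."""
    if request.category in SENSITIVE_CATEGORIES:
        if request.confidence >= AUTO_BLOCK_THRESHOLD:
            return "blocked"        # clearly over the line: refuse immediately
        review_queue.put(request)   # uncertain: a human reviews before anything is generated
        return "held_for_review"
    return "allowed"                # non-sensitive requests proceed automatically

# Example: an edit request touching a real person's photo is held for review.
print(route(ImageRequest("u123", "edit this photo", "real_person_edit", 0.6)))
```

The design choice in the sketch is deliberately conservative: anything the classifier is unsure about waits for a person, rather than being generated first and moderated later.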
Back to the human story
For now, the headlines are about regulatory reviews and paywalls. But the damage is personal and intimate. A teacher in Manchester told me she worries about “students seeing their faces in things that never happened”—a worry that is at once modern and timeless: the fear of being shamed, misrepresented, or harmed by a tool beyond one’s control.
As X navigates scrutiny from Dublin to London, the rest of us should ask not just whether this company acted responsibly, but what kind of digital commons we want. Do we demand platforms that prioritise safety and dignity, even if enforcement is messy? Or do we accept an internet that prizes novelty and scale above human consequence?
These are not questions for AI engineers alone. They are questions for lawmakers, parents, teachers, advertisers, and the people who click and share. What will we tolerate? And what will we protect?
When the story settles, the answer will tell us as much about our society as any algorithm ever could. Will we choose to make tools that uplift, or tools that exploit? The choice will shape more than policy papers—it will shape people’s lives.