EU Opens Regulatory Probe into X’s AI Tool Grok Amid Scrutiny

12
EU launches investigation into X's AI tool Grok
Elon Musk's Grok chatbot is to be ⁠investigated on whether it disseminates illegal content such as manipulated sexualised images in the European Union

When an Algorithm Crossed a Line: Europe’s Crackdown on X’s Grok

It began, as many modern scandals do, with a single image and the slow, sickening realization that what felt like an isolated incident was anything but. Photos—manipulated, sexualized, impossible children—were surfacing on X. They moved across timelines, hopped between accounts, and lodged in the feeds of ordinary people who had logged on for news or jokes or to check on a friend.

The European Commission has answered with a formal investigation into Grok, the artificial intelligence engine embedded inside X, probing whether the tool enabled the spread of sexually explicit images, including material that may amount to child sexual abuse. The inquiry is being conducted under the Digital Services Act (DSA), the EU’s most ambitious law yet aimed at reining in platform harms.

What’s at stake

Ask any parent why they care about this investigation and they will not talk in policy jargon. “If someone can stitch my child into a degrading image and it spreads inside a heartbeat,” says Aoife Brennan, a mother of two in Dublin, “how am I supposed to protect them?”

The questions the Commission is asking are stark: Did X treat Grok as just another feature and fail to assess the risks it posed? Did the company meet its legal obligations to prevent systemic harms? And if it didn’t, what will the consequences be?

DSA: More than a rulebook

The DSA, which creates special obligations for so-called Very Large Online Platforms (VLOPs) with more than 45 million monthly users in the EU, requires careful, transparent risk assessments. Platforms designated as VLOPs must anticipate and mitigate systemic risks—ranging from the spread of illegal content to effects on public health and the rights of children—before, during, and after rolling out powerful features.

According to the European Commission, Grok is conspicuously absent from the risk assessment reports X is required to publish. “We expect companies to get their house in order,” a Commission spokesperson told us. “Grok doesn’t appear in those assessments. That omission is not a minor paperwork issue; it’s central to whether X complied with the law.”

The discovery and the response

News of sexualized deepfakes triggered alarm across Europe. Advocacy groups, parents, and regulators raised their voices. “These images aren’t just distasteful—when they involve children or are non-consensual, they’re a form of violence,” says Dr. Miriam Kovács, an academic who studies online abuse. “We have to treat them as such.”

The Commission coordinated closely with Ireland’s Coimisiún na Meán, the national digital regulator, because X’s European headquarters are in Dublin. Regulators there say they welcomed the formal probe and reminded people that legal responsibilities exist for online platforms under both national and EU law. “There is no place in our society for non-consensual intimate imagery or child sexual abuse material,” the regulator said in a public statement.

Inside EU agencies, the concern was not theoretical. Technical teams at the Centre for Algorithmic Transparency (ECAT) in Seville had been watching Grok since reports of a surge in hateful content last autumn. Their monitoring, combined with complaints received from users, created a picture of systemic problems linked to how the AI was being used on the platform.

Voices from the front line

On a rainy morning in Dublin’s Temple Bar, I spoke with Conor Maher, a freelance photographer whose younger sister’s likeness was subtly altered and circulated online. “The first thing you do is scream, then you call your family, then you try to chase it down,” he said. “But chasing a photo across the internet feels like trying to stop the tide.”

Policy experts were less emotive but equally blunt. “Companies have to take responsibility when they deploy models that can create lifelike images of people,” said Lina Ortega, a digital rights lawyer. “Mitigations like filters, robust reporting mechanisms, and pre-launch risk assessments aren’t optional. They’re central to preventing harm.”

Some voices urged caution before rushing to ban tools. “AI is a tool; it can be misused,” said Julian Weiss, a tech entrepreneur in Berlin. “But the right response is smart regulation and enforcement, not panic.” The Commission’s investigation is precisely about whether existing rules were followed.

What the investigation will do

The formal opening of proceedings under the DSA gives the Commission broad powers: it can request documents, interview staff, and carry out inspections. If the inquiry finds non-compliance, X could face additional enforcement measures on top of recent penalties.

In December, the Commission fined X €120 million over issues including deceptive design and insufficient transparency around advertising and data access for researchers. That financial penalty underscored a broader point: being large in Europe’s digital landscape brings responsibilities—and consequences when those responsibilities are not met.

Areas under scrutiny

  • Whether X conducted sufficient risk assessments of Grok and its integration into platform features.

  • Whether Grok’s capabilities materially increased the dissemination of illegal or non-consensual sexual imagery.

  • How the platform’s recommender systems—now under a separate but related probe—interact with Grok to amplify harmful content.

Local details that matter

Walk through Dublin, and you see more than government offices. You see a city where tech interns line up for flat whites, where natal posters for community theatre hang beside murals, where conversations about privacy and family safety hum in cafés. That is partly why the Irish regulator’s role feels intimate; this is not an abstract legal battle but one that touches families and neighborhoods.

In Seville, where technical teams have been plotting the algorithms’ behavior, engineers and ethicists have been poring over logs, looking for patterns. “We’re tracing how prompts travel, how images are generated and amplified,” said a researcher involved in monitoring Grok. “It’s like modern detective work—only the clues are data points and the suspects are code.”

A global ripple effect

As Europe tightens oversight on AI-powered platforms, the consequences will not stop at its borders. How regulators handle Grok could set precedents on transparency, safety, and corporate accountability for platforms from San Francisco to Singapore.

So here’s a tough question for every reader: Do you trust the architectures that shape your attention? Do you believe private companies will act fast enough to protect the vulnerable, or do we need rules that bite harder?

Closing: What comes next

The Commission has said X must provide more information, and officials expect further inspections. Irish and European regulators say they’ll play active roles. Lawmakers—like Irish MEPs who called for suspension of Grok while the probe proceeds—are watching closely.

For families who found their lives disrupted by a manipulated image, the investigation is not just a headline. It is a test of whether laws designed to protect citizens can keep pace with technologies that create harm almost as fast as they create convenience.

“We need systems that put people before profit,” says Aoife Brennan. “If the rules are only words on paper, they are meaningless when someone’s child is being exploited online.”

The EU’s probe into Grok is, then, more than a regulatory skirmish. It is a moment of collective reckoning—a chance to decide how we want a future shaped by AI to look, and who will be held accountable when the machines go wrong.