
French cybercrime investigators raid X offices in criminal probe

French cybercrime authorities search X offices. The operation also involved Europol. (Stock image)

A morning raid in Paris — and a question that refuses to go away: who controls the algorithms?

It began like a scene from a city that tends to dramatise even its routine: uniformed officers slipping through glass doors, security shutters clanging down, a swarm of reporters craning their necks outside an office tower a few steps from a street where cafés were already serving espresso. This time the target was not a bank or a celebrity; it was the Paris outpost of X, the social platform once known as Twitter.

By day’s end, the microphones and notepads had given way to a far heavier reality. French prosecutors had widened a year-long probe into alleged abuses around the platform’s algorithms and the extraction of user data. The inquiry, which began with questions about automated processing and biased systems, has now grown to encompass the behaviour of X’s artificial-intelligence chatbot Grok and accusations that the platform may have facilitated the spread of Holocaust denial content and sexually explicit deepfakes.

A legal crescendo

The Paris prosecutor’s cybercrime unit, working with national police cyber teams and Europol, executed searches of X’s offices and issued summonses. Elon Musk and former CEO Linda Yaccarino have been ordered to appear for questioning on 20 April. Several employees are also expected to be called as witnesses.

“At this stage, our objective is straightforward,” said a senior Paris prosecutor who spoke on condition of anonymity to discuss an ongoing investigation. “We are investigating whether automated systems were allowed to function in ways that breached French law. Platforms operating here must respect our legal framework—no exceptions.”

Legal sources say the probe began after a French MP raised concerns that algorithmic bias could distort automated data processing. From there the scope expanded: complaints arrived about Grok generating harmful content, and separate allegations pointed to the propagation of sexually explicit images, including material that may involve children.

What’s being alleged — and why it matters

The accusations are serious but, for now, remain allegations. Authorities are looking into whether X or its executives knowingly enabled or turned a blind eye to:

  • the manipulation or misuse of ranking and recommendation algorithms;
  • fraudulent automated extraction of user data;
  • the dissemination of Holocaust denial material through the platform;
  • and the sharing or facilitation of sexually explicit deepfakes, potentially including underage imagery.

These are the kinds of claims that, if proven, would land a global tech company at the centre of both criminal and regulatory upheaval. “When algorithmic systems touch millions of people every day, the margin for harm is enormous,” says Dr. Sophie Laurent, a digital-rights researcher at a European university. “We’re not talking about edge cases. We’re talking about systemic vulnerabilities that can amplify hate, distort history, and destroy lives.”

Voices from the street: fear, disbelief, frustration

Outside the office that morning, reactions were as varied as you’d expect in a city that doubles as a global media capital. Nadia, a Paris-based podcast producer, shook her head as she waited with a thermos of coffee. “People rely on these platforms to be the public square,” she said. “But if that square is curated by algorithms that are not transparent, then whose truth are we walking into?”

In Dublin, the uproar took on a political tone. Labour TD Alan Kelly called X’s refusal to appear before a media regulation committee “disgraceful,” saying the company was skipping an opportunity to be held to account in front of the Irish public. “Meta and Google have agreed to come in,” he told reporters. “Why is X avoiding scrutiny? We need assurances that this will not happen again, and if a platform refuses to comply, we will change the law.”

A spokesperson for the Taoiseach’s office confirmed that Dublin had written to X in support of a parliamentary request, and that the matter is being raised at multiple levels, including with Coimisiún na Meán and the European Commission. The Commission has reportedly opened its own formal investigation into Grok.

Industry response — and denials

X has pushed back. In public statements last summer, Elon Musk described early accusations as politically motivated. An X representative told international outlets that the company cooperates with law enforcement and that safety systems are in place to detect and remove illegal content. “We take these allegations seriously and are working with authorities,” a spokesperson said.

But to many observers those words are not enough. “Assurances on paper don’t cut it when people’s privacy and safety are at stake,” said Maria Fernandes, an Irish mother whose teenage daughter discovered a deepfake impersonating a schoolmate last year. “We need real consequences. We need checks that work.”

The wider picture: regulation, technology and a race against time

This isn’t happening in a vacuum. The EU’s Digital Services Act (DSA), which became fully applicable in February 2024, already requires large online platforms to take stronger measures against systemic risks. Yet enforcement is complex: the internet is global, companies are mobile, and technology moves at a speed that regulators often can’t match.

Europol’s involvement signals that the issue is being treated as more than a domestic regulatory squabble. The international dimension is unmistakable: data can be pulled across borders, harmful content can be uploaded in one jurisdiction and viewed in another, and cloud-based AI models are hosted on servers scattered around the world.

Concern about AI-generated sexual content is also borne out by data. A 2019 study by Deeptrace (now Sensity) found that the overwhelming majority of deepfake videos detected online, roughly 96% at the time, were sexual in nature. And while deepfake-detection technology has improved, the ease with which modern generative systems produce such material means the problem keeps evolving.

What’s at stake for everyday users

At heart, this is about trust. Can individuals feel safe posting photos of their families, discussing politics, or searching for news without worrying that an algorithm will auction their attention to the highest bidder, or that their likeness could be weaponised?

“We need clearer transparency: what signals are being used to promote content, who trains these models, and how are falsehoods or abusive images being identified?” asks Dr. Laurent. “Beyond transparency, we need enforceable audit rights, so independent experts can test these systems.”

Questions to ask—and actions to demand

As the legal process unfolds in Paris and political pressure mounts in Dublin, readers might reflect on their own relationship with the platforms that shape public life. How much do you know about the algorithms that decide your news feed? Would you accept a court order banning a platform in your country if it persistently flouted local law? What responsibility should tech giants bear when their tools create real-world harm?

These are not rhetorical questions. They are the contours of a debate that will determine how societies balance innovation, free expression and protection from harm. For now, X faces searches, summonses, and scrutiny—moves that remind us that the internet, for all its borderlessness, can still be held to account by nation-states and international bodies.

Whether that accountability will be swift enough, fair enough, and effective enough is another matter. As Paris slowly returned to its rhythms, with bakers pulling baguettes from ovens and commuters hurrying along the Seine, the raid left a quieter imprint: a renewed public demand for clarity about how we are governed by lines of code. That demand is unlikely to be satisfied by press releases alone.

So tell me: what would you want to see from a platform that touches millions of lives every day? Greater transparency? Stricter penalties? Or something else entirely?