European Union opens probe into X’s Grok AI chatbot

EU opens DSA probe into X’s Grok over sexual deepfakes, potential child abuse images

The European Commission has launched a formal investigation into Grok, the artificial intelligence tool embedded in X, over its role in the spread of sexually explicit images, including potential child sexual abuse material. The action, taken under the EU’s Digital Services Act, follows public outcry over sexually manipulated images circulating on the platform.

“Sexual deepfakes of women and children are a violent, unacceptable form of degradation,” said EU Commissioner for Tech Sovereignty Henna Virkkunen, as Brussels signaled it would use the bloc’s new online safety law to scrutinize how Grok is deployed and monitored inside X.

The Commission said it is working closely with Ireland’s media regulator, Coimisiún na Meán, because X’s European headquarters is in Dublin. The Irish regulator welcomed the probe after weeks of engagement with the Commission, noting that European and Irish law impose clear responsibilities on platforms around illegal content and that regulators are ready to enforce them.

At the heart of the case is whether X met its legal duty to assess and mitigate systemic risks tied to illegal content and harm. Commission spokesperson Thomas Regnier said one thing is clear from X’s published DSA risk assessments: Grok does not appear in them. That omission, he said, suggests the company “has simply not assessed the risk” Grok poses to EU users, despite its legal obligation to do so.

Brussels said those risks have already materialized, exposing people in the EU to serious harm. Under the DSA, X must diligently mitigate systemic risks, including the dissemination of illegal content, negative effects related to gender-based violence, and serious consequences for users’ physical and mental well-being stemming from Grok’s features.

Separately, the Commission widened its investigation, first opened in December 2023, into X’s “recommender” systems—algorithms that shape what users see—adding scrutiny of the company’s recent move to a Grok-based recommender. The review will assess whether X has properly evaluated and reduced the related systemic risks.

The Commission said it will send additional requests for information to X and may conduct interviews or inspections. Opening formal proceedings under the DSA gives Brussels greater enforcement powers, including the ability to find the company in non-compliance.

X, designated a “very large online platform” under the law, must assess and mitigate systemic risks across its service in the EU, especially risks to minors and risks from illegal content. Grok, developed by Elon Musk’s xAI and integrated into X, has been used since 2024 to generate text and images and to provide contextual information for posts on the platform.

In December 2025, the Commission fined X €120 million for DSA violations involving deceptive design, opaque advertising and insufficient researcher data access, and gave the company three months to pay. EU officials also stressed that national regulators and law enforcement play key roles in tackling the dissemination of non-consensual and illegal imagery.

Officials said Grok had been under observation for some time, including after a surge of antisemitic material associated with the tool last autumn, with technical staff at the European Centre for Algorithmic Transparency in Seville involved in the monitoring. Brussels subsequently pressed X for details of its Grok risk assessments. While the company made some changes after “very intense” recent engagement, officials said the systemic risk persists and that X appears to treat Grok as a separate entity rather than fully integrating it into platform-wide risk controls.

Reaction from lawmakers was swift. Irish MEP Regina Doherty welcomed the formal probe, saying that when credible reports emerge of AI systems harming women and children, EU law must be enforced without delay. Fellow Fine Gael MEP Maria Walsh urged the Commission to suspend Grok’s use in the EU while the investigation proceeds.

Coimisiún na Meán said its contact centre is available to support people concerned about what they encounter online and emphasized that public reports help regulators hold platforms to account.

By Abdiwahab Ahmed

Axadle Times International – Monitoring.