X cracks down on Grok’s AI-generated undressing deepfakes
X will restrict its AI chatbot Grok from generating or editing sexualized images of real people, and will geoblock content depicting "bikinis, underwear and similar attire" in jurisdictions where such activity is illegal. The move follows a global backlash over deepfakes of women and children and mounting pressure from regulators.
“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” X’s safety team said. “This restriction applies to all users, including paid subscribers.” The company added it will “geoblock the ability” for Grok and X users to create images of people in revealing clothing in jurisdictions where those actions are unlawful.
The announcement follows weeks of criticism aimed at Elon Musk’s xAI, which develops Grok, after users exploited the chatbot’s image features to produce sexualized deepfakes. Law enforcement and child-safety groups warned that so-called “nudification” tools can rapidly proliferate harmful content, including material involving minors.
In Ireland, Gardaí said there are 200 active investigations into child sexual abuse-related images generated by Grok. Detective Chief Superintendent Barry Walsh, head of the Garda National Cyber Crime Bureau, confirmed an ongoing investigation related to the chatbot. Government ministers plan to meet at a roundtable next week to assess how to curb AI-generated sexual content and material linked to child sexual abuse, following talks between the Taoiseach and Niamh Smyth, the minister of state responsible for AI.
Smyth said Grok should be banned in Ireland if X fails to comply with Irish law governing the creation of sexualized images of both children and adults, arguing that the legal framework already exists and requires enforcement. Speaking in a broadcast interview, she said authorities need to act on laws covering the creation and dissemination of AI-generated sexual imagery.
Lawmakers signaled they want answers from X. Labour TD Alan Kelly, chair of the Oireachtas Media Committee, said the platform has been invited to appear on Feb. 4 and called it “unacceptable” if executives refuse. “I expect them to turn up, I expect them to address these issues,” he said, adding that “moderation does not work” for AI-generated sexual content. “These AI tools should not be allowed to do this, it’s as simple as that, and it is up to the platforms to deal with this.”
Kelly argued that if European action is too slow, Ireland should move on its own. “We have robust laws,” he said, but they need to be complete enough to protect children and adults across platforms where AI tools can sexualize or “nudify” people.
Irish child-safety advocates welcomed the clampdown but criticized X for reacting only under intense scrutiny. Michael Moran, CEO of Hotline.ie, the Irish Internet Hotline, said the misuse “could have been foreseen.” “You have to welcome any change,” he said, “but I think the way it has happened over the last month really needs to be looked at. All of this was and could have been foreseen by the X organization. To suggest that they are now bringing in safety and that they’re to be lauded for it is just not acceptable.”
Moran said removing the "functionality was key" to addressing the unregulated use of AI. While major firms will eventually moderate, he warned that other AI engines with little oversight will continue to enable harmful outputs. He called X's move "a win for Coimisiún na Meán" and for European regulators who reacted quickly, yet urged lawmakers to close gaps: in Ireland, sharing images of a person in a state of undress is illegal, but generating such content is not. Apps that produce those images "should be banned, or also made illegal," he said, adding: "Make it illegal for AI to produce CSAM [child sexual abuse material] in the first place."
Scrutiny is widening beyond Ireland. Britain’s Ofcom opened a probe into whether X failed to comply with U.K. law concerning sexual images generated through the platform. In France, the commissioner for children, Sarah El Hairy, said she had referred Grok-generated images to prosecutors, the media regulator Arcom and the European Union. Indonesia became the first country to block access to Grok entirely, with neighboring Malaysia following; India said X removed thousands of posts and hundreds of user accounts in response to complaints.
X’s new safeguards point to a growing compliance challenge for global platforms. By geoblocking and restricting specific image categories only in jurisdictions where they are illegal, companies risk applying uneven protections that depend on where a user lives or how a request is routed—an approach critics say can leave loopholes. Regulators, meanwhile, are signaling they expect “safety by design,” not reactive moderation, particularly when generative AI tools can be rapidly misused at scale.
For Ireland, next steps include the government’s roundtable and the Oireachtas committee hearing, which could shape whether authorities seek platform-specific bans, tighter rules on AI image generation, or broader obligations under media and online safety laws. Across Europe and Asia, investigations and outright blocks reflect a converging view: when AI products enable sexualized deepfakes—especially those involving children—platforms will be held responsible for preventing harm, not just removing it after the fact.
Whether X’s measures satisfy that demand will likely hinge on how quickly and comprehensively the company enforces them, how it handles evasion and repeat abuse, and whether xAI limits Grok’s “nudification” capabilities at the source. As Moran put it, “Let’s see that it is actually removed as an actual possibility.”
By Abdiwahab Ahmed
Axadle Times International – Monitoring