Grok disables AI image undressing in jurisdictions where it’s illegal
X said it will geoblock parts of its AI image tools to stop Grok, its chatbot, from generating or editing sexualized images of real people, escalating its response to a global backlash and mounting investigations tied to deepfake abuse of women and minors.
In a statement, X’s safety team said it will “geoblock the ability” of all Grok and X users to create images of people in “bikinis, underwear, and similar attire” in jurisdictions where such content is illegal. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” the team said, adding, “This restriction applies to all users, including paid subscribers.”
The move follows weeks of pressure on xAI, the developer of Grok, after reports that users could produce sexualized deepfakes of women and children. Law enforcement and regulators across multiple countries are now probing whether X and Grok breached existing protections against child sexual abuse material and nonconsensual imagery.
Irish authorities said 200 investigations are under way into child sexual abuse-related imagery generated by Grok. Detective Chief Superintendent Barry Walsh, head of the Garda National Cyber Crime Bureau, confirmed an ongoing investigation into Grok.
Ireland’s government will convene a roundtable next week to consider options for combating AI-generated sexual imagery, including content targeting children. The issue was discussed at a meeting between the Taoiseach and Minister of State for AI Niamh Smyth. Smyth said Grok should be banned in Ireland if X fails to comply with Irish law governing the creation and dissemination of sexualized images of both children and adults, and she emphasized that enforcement provisions are in place.
In the U.K., media regulator Ofcom said it has opened a probe into whether X failed to comply with national law over the sexual images tied to Grok. French commissioner for children Sarah El Hairy said she had referred Grok-generated images to French prosecutors, the Arcom media regulator and the European Union, signaling a widening European response.
Authorities in Asia have moved decisively as well. Indonesia became the first country to block access to Grok entirely, with neighboring Malaysia following. India said X removed thousands of posts and hundreds of user accounts in response to official complaints about the content.
X’s new geofencing approach suggests it will tailor image-generation and editing features by country, restricting or disabling them where laws prohibit such material. The company did not specify how it will determine jurisdictions or enforce boundaries beyond the stated geoblocking and technical filters, and it did not say whether further global restrictions are under consideration.
The controversy underscores the speed at which generative AI tools can be weaponized for abuse and the difficulty platforms face in curbing misuse without disabling widely used features. Regulators are increasingly treating AI-enabled image editing and deepfakes as a continuation of existing harms—especially child sexual abuse material and nonconsensual sexual imagery—rather than an entirely new category, enabling faster enforcement under current law.
For X and Grok, the stakes are legal as well as reputational. Governments are testing whether existing statutes and platform liabilities apply to generative systems, while victims’ advocates are calling for blanket bans on tools that can fabricate sexualized images of real people. X’s commitment to geoblock and technically restrict Grok marks a notable shift in posture, but authorities in Ireland, the U.K., France, Indonesia, Malaysia and India are signaling that compliance—and demonstrable results—will be the test.
Elon Musk, who owns X, has championed rapid AI development through xAI, positioning Grok as a rival to other chatbots. The latest curbs illustrate the growing tension between AI innovation and safety obligations as platforms confront the spread of sexualized deepfakes and child abuse content online.
By Abdiwahab Ahmed
Axadle Times International – Monitoring