Elon Musk’s artificial intelligence venture xAI has moved to curb one of the most controversial uses of its Grok chatbot, announcing that the tool will no longer allow the creation or editing of sexualised images of real individuals. The decision follows mounting criticism, regulatory scrutiny across continents and growing concerns over the misuse of generative AI on X, the social media platform formerly known as Twitter.
The company confirmed that the updated safeguards apply to all users on X, including those paying for premium subscriptions, marking a significant shift in how Grok’s image features can be used.
New Guardrails Introduced Across X
In a statement posted on X on Wednesday, xAI said it has rolled out technical changes designed to stop Grok from manipulating images of real people into revealing or sexualised forms.
“We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” the company posted.
The company added that the restrictions are universal, applying not just to free users but also to paying subscribers. This follows an earlier move last week when xAI limited Grok’s image generation and editing capabilities to premium users only.
While the new controls shut down certain uses, subscribers will still be able to generate and edit other AI-created images, provided they remain within the platform’s existing terms of service. xAI also clarified that Grok has been blocked from producing “images of real people in bikinis, underwear and similar attire” in jurisdictions where such content is prohibited by law.
Backlash Over Non-Consensual AI Content
The policy change comes after weeks of backlash triggered by how Grok was being used on X. Users had begun exploiting the chatbot to digitally alter photographs of women and children, producing sexualised content without consent. These AI-generated images quickly spread across the platform, prompting outrage from civil society groups, safety advocates and policymakers.
The scale of the misuse trained a spotlight on xAI’s content controls and raised fresh questions about the responsibilities of AI developers in preventing digital abuse. Critics argued that the absence of effective safeguards had enabled harm, particularly to women and minors, making Grok a focal point in the wider debate over ethical AI deployment.
Global Regulatory Scrutiny Intensifies
Regulators have since stepped in across multiple regions. The California attorney general’s office opened an investigation into xAI on Wednesday, while France and the United Kingdom have launched their own inquiries. The European Union is separately assessing whether the circulation of such images breaches its Digital Services Act.
Outside the West, concerns have led to direct restrictions. Malaysia and Indonesia have limited access to Grok within their borders, citing the risks posed by the tool’s misuse.
Despite Elon Musk previously attributing the controversy to user behaviour rather than platform design, xAI has now reiterated its stance on safety in its public messaging.
“We remain committed to making X a safe platform for everyone,” the company wrote, adding that it maintains “zero tolerance for any forms of child sexual exploitation, nonconsensual nudity and unwanted sexual content.”
The move signals a recalibration for xAI as it faces growing global pressure to balance innovation with responsibility in the rapidly evolving AI landscape.