X is facing fresh criticism after reports showed that Elon Musk’s chatbot Grok is still generating nonconsensual intimate deepfake images, even after new restrictions were announced. The platform claimed it had added safeguards and legal compliance controls, but recent testing suggests the tool can still create revealing edits of real people’s photos, particularly of men, from simple prompts. While some direct nudity requests are blocked, users can bypass the limits with creative wording.
Regulators across several countries are now watching closely, and pressure is growing on X and xAI to fix the issue fully and fast.
Grok AI Deepfake Images Still Easy To Generate
Despite public promises and technical fixes, Grok’s image tools are still producing revealing fake images from ordinary photos. Tests show that when photos of fully clothed people are uploaded, the chatbot can still edit them into underwear shots, fetish outfits, or suggestive scenes.
Many of these results come after only slight prompt changes. Some requests are denied, but many go through on the first or second try.
Restrictions appear stronger for images of women: in many cases, Grok refuses bikini or other revealing edits of women but allows similar edits of men. This uneven filtering raises new concerns about how the safety system is designed and enforced.
Another major issue is access. Even after X moved the feature behind a paywall, similar tools remained freely available on Grok’s standalone app and website.
X Grok Deepfake Controversy Triggers Global Scrutiny
The deepfake problem has now moved beyond online outrage into legal and regulatory action. Governments in multiple regions have launched probes.
Some countries temporarily blocked X, saying the platform failed to prevent misuse. Lawmakers are also pushing to fast-track legislation that would make nonconsensual intimate deepfakes a criminal offence.
Earlier data suggested millions of AI images were generated in just days, with a large share being sexualized edits of real people.
After the backlash, X announced new technical barriers. But investigators say these controls mostly affect public posts, not private image generation inside the tool.
Experts warn that partial filters are not enough. If workarounds remain easy, abuse will continue. The Grok case is now being seen as a major test of how AI platforms handle consent, safety, and accountability in image generation tools.