- Apple threatened to remove Grok from the App Store over deepfakes.
- Grok generated non-consensual sexualised images, violating App Store guidelines.
- Apple demanded content moderation plans from the X and Grok developers.
Elon Musk’s xAI came close to having its Grok chatbot removed from Apple’s App Store after the AI tool was found generating non-consensual, sexualised deepfakes earlier this year. While Apple stayed quiet publicly during the controversy, it had flagged violations of its App Store guidelines behind the scenes and threatened to remove the app, according to a report by NBC News.
The development marks the first time details of Apple’s private actions against Grok have been made public, and it also points to a deepening tension between social media platforms and app store operators.
How Did Apple Respond to the Grok Deepfake Controversy?
According to NBC News, Apple contacted the teams behind both X and Grok after receiving complaints and following news coverage of the scandal. The company asked the developers to put together a plan for better content moderation. When X submitted an updated version of the Grok app for review, it was rejected because the changes were not considered sufficient.
X submitted revised versions of both the X and Grok apps, but only the X app was accepted. Apple made its position clear in a letter to US senators at the height of the backlash:
“Apple reviewed the next submissions made by the developers and determined that X had substantially resolved its violations, but the Grok app remained out of compliance. As a result, we rejected the Grok submission and notified the developer that additional changes to remedy the violation would be required, or the app could be removed from the App Store.”
The letter further stated: “Following further engagement and changes by the Grok developer, we determined that Grok had substantially improved and therefore approved its latest submission.”
This came after three Democratic US senators, Ron Wyden of Oregon, Ben Ray Lujan of New Mexico, and Edward Markey of Massachusetts, wrote to both Apple and Google in January 2026, urging them to remove X and Grok from their platforms.
The letter pointed out that Apple’s App Store terms barred sexual or pornographic material, while Google’s Play Store prohibited content that facilitated the exploitation or abuse of children. “Turning a blind eye to X’s egregious behaviour would make a mockery of your moderation practices,” the letter read.
Previously, Musk had threatened to sue Apple for allegedly favouring rival OpenAI over Grok on its App Store.
What Was The Grok Undressing Controversy About?
The controversy began when users discovered that Grok would readily comply with requests to generate sexualised images of real people, including images that appeared to undress them. This drew backlash from lawmakers and regulators across multiple countries, including India, the UK, Malaysia, and Indonesia.
In India, the government issued a formal notice to X, following which the platform removed 3,500 pieces of content and blocked 600 accounts. X also acknowledged its mistake. The government had expressed dissatisfaction over X’s response to its January 2 notice, which concerned the platform’s failure to observe due diligence obligations under the Information Technology Act, 2000 and associated rules.
xAI moved to limit Grok’s image generation capabilities to paid users only. Musk warned: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
However, the problem has not gone away entirely. A separate NBC News report found dozens of AI-generated sexualised images of real women posted to X over the past month. A February Reuters report also found that while Grok’s public X account had slowed its output, the Grok chatbot app continued to generate such content when prompted, even after warnings that the subjects were vulnerable.
In response, the X Safety account posted: “We strictly prohibit users from generating non-consensual explicit deepfakes and from using our tools to undress real people. xAI has extensive safeguards in place to prevent such misuse, such as continuous monitoring of public usage, analysis of evasion attempts in real time, frequent model updates, prompt filters, and additional safeguards.”