News

Grok Imagine Faces Backlash Over Taylor Swift Deepfake Scandal

Elon Musk’s AI video generator, Grok Imagine, is under fire after reports revealed that its “spicy” mode produced explicit deepfake content of Taylor Swift without any direct prompt for nudity. The Verge reported that a journalist entered the prompt “Taylor Swift celebrating Coachella with the boys” and enabled spicy mode, which returned a six-second clip of Swift undressing, even though no sexual content had been requested.

Unlike rivals such as Google’s Veo or OpenAI’s Sora, which enforce strict protections against celebrity deepfakes, Grok’s spicy mode reportedly bypasses such safeguards. Users can generate sexualized depictions of real people with minimal checks, often no more than a casual age confirmation. This has raised serious ethical and legal concerns about consent, exploitation, and the potential for large-scale AI misuse.

This is not an isolated case. Since launch, Grok Imagine has generated over 34 million images, and Elon Musk has frequently promoted its rapid adoption on social media. Yet the platform has faced repeated criticism for inadequate content moderation and enforcement failures.

The controversy underscores the urgent need for stronger ethical and legal frameworks in AI development. With U.S. laws such as the Take It Down Act mandating swift removal of non-consensual explicit imagery, xAI could face legal action if its safeguards are not improved. Without decisive intervention, the company risks severe reputational damage, regulatory scrutiny, and a deepening loss of public trust in generative AI.