Grok Imagine generates unsolicited deepfake nudes of Taylor Swift, report says

The new generative AI tool is already a source of controversy.
By Meera Navlakha

Grok Imagine, xAI's new generative AI tool, created explicit deepfakes of Taylor Swift — without being specifically prompted to do so, according to The Verge. Mashable reported yesterday that Grok Imagine lacks even basic guardrails around sexual deepfakes, and our testing produced results similar to The Verge's.

The Verge's Jess Weatherbed discovered that Grok Imagine "spit out full uncensored topless videos" the very first time she used the tool. She didn't ask the bot to depict Swift topless, but once she turned on Grok Imagine's "spicy" mode, the bot churned out a video in which Swift tore off her clothes and began dancing in a thong.

As Weatherbed noted, Grok Imagine wouldn't generate full or partial nudity if requested; instead, the tool produced blank squares. The "spicy" mode — a preset that churns out NSFW content — does not always result in nudity, but it did present Swift "ripping off most of her clothing" in several videos.


This isn't the first time Elon Musk's X has been associated with deepfakes of Swift. In January 2024, AI-generated, pornographic images depicting Swift went viral on X, drawing criticism. This happened despite the fact that X explicitly forbids posting nonconsensual nudity and "synthetic, manipulated, or out-of-context media" that deliberately deceive users or claim to depict reality.

xAI's policies similarly prohibit "depicting likenesses of persons in a pornographic manner." And as Mashable's Timothy Beck Werth reported yesterday, Grok Imagine "lacks industry-standard guardrails to prevent deepfakes and sexual content."

Mashable repeatedly reached out to xAI, but we have not received a response.

Deepfakes have become a growing concern for lawmakers, but laws against this type of behavior and content are still in their infancy. A 2023 study found that 98 percent of deepfake videos online were pornographic; of those videos, 99 percent depicted women. Globally, governments have looked to tackle what has been dubbed a digital-age crisis. President Donald Trump recently signed the Take It Down Act into law, a controversial piece of legislation that makes it a federal crime to publish or threaten to publish nonconsensual intimate images.

Meera Navlakha

Meera is a journalist based between London and New York. Her work has been published in The New York Times, Vice, The Independent, Vogue India, W Magazine, and others. She was previously a Culture Reporter at Mashable. 



