Grok Imagine's 'Spicy' mode lacks basic guardrails for sexual deepfakes

Elon Musk's new AI image and video maker fails basic safety tests.
By Timothy Beck Werth
The Grok app on a smartphone with a Mars wallpaper. Credit: Andrey Rudakov/Bloomberg via Getty Images

Updated on Wednesday, Aug. 6 at 11:00 a.m. ET — Other outlets have also reported that Grok Imagine will readily produce sexual deepfakes. The Verge reported on Tuesday that the Grok Imagine "Spicy" setting produced nude deepfakes of Taylor Swift, unprompted. This isn't Elon Musk or X's first controversy involving Taylor Swift deepfakes; in January 2024, AI-generated deepfakes of Swift went viral on X, sparking a backlash against the platform.


Grok Imagine, xAI's new generative AI tool for creating images and videos, lacks basic guardrails against sexual content and deepfakes. Even when specific celebrities are mentioned by name, Grok Imagine will readily produce sexual deepfakes.

xAI and Elon Musk debuted Grok Imagine over the weekend, and it's available now in the Grok iOS and Android app for xAI Premium Plus and Heavy Grok subscribers.

Mashable has been testing the tool to compare it to other AI image and video generation tools, and based on our first impressions, it lags behind similar technology from OpenAI, Google, and Midjourney on a technical level. Grok Imagine also lacks industry-standard guardrails to prevent deepfakes and sexual content. Mashable reached out to xAI, and we'll update this story if we receive a response.

The xAI Acceptable Use Policy prohibits users from "Depicting likenesses of persons in a pornographic manner." Unfortunately, there is a lot of distance between "sexual" and "pornographic," and Grok Imagine seems carefully calibrated to take advantage of that gray area. Grok Imagine will readily create sexually suggestive images and videos, but it stops short of showing actual nudity, kissing, or sexual acts.

Most mainstream AI companies include explicit rules prohibiting users from creating potentially harmful content, including sexual material and celebrity deepfakes. In addition, rival AI video generators like Google Veo 3 or Sora from OpenAI feature built-in protections that stop users from creating images or videos of public figures. Users can often circumvent these safety protections, but they provide some check against misuse.

But unlike its biggest rivals, xAI hasn't shied away from NSFW content in its signature AI chatbot Grok. The company recently introduced a flirtatious anime avatar that will engage in NSFW chats, and Grok's image generation tools will let users create images of celebrities and politicians. Grok Imagine also includes a "Spicy" setting, which Musk promoted in the days after its launch.

Grok's "spicy" anime avatar, Ani, on a phone screen in front of the Grok logo. Credit: Cheng Xin/Getty Images

"If you look at the philosophy of Musk as an individual, if you look at his political philosophy, he is very much more of the kind of libertarian mold, right? And he has spoken about Grok as kind of like the LLM for free speech," said Henry Ajder, an expert on AI deepfakes, in an interview with Mashable. Ajder said that under Musk's stewardship, X (Twitter), xAI, and now Grok have adopted "a more laissez-faire approach to safety and moderation."

"So, when it comes to xAI, in this context, am I surprised that this model can generate this content, which is certainly uncomfortable, and I'd say at least somewhat problematic?" Ajder said. "I'm not surprised, given the track record that they have and the safety procedures that they have in place. Are they unique in suffering from these challenges? No. But could they be doing more, or are they doing less relative to some of the other key players in the space? It would appear to be that way. Yes."

Grok Imagine errs on the side of NSFW

Grok Imagine does have some guardrails in place. In our testing, it removed the "Spicy" option for some types of images. Grok Imagine also blurs out some images and videos, labeling them as "Moderated." That means xAI could easily take further steps to prevent users from making abusive content in the first place.

"There is no technical reason why xAI couldn’t include guardrails on both the input and output of their generative-AI systems, as others have," said Hany Farid, a digital forensics expert and UC Berkeley Professor of Computer Science, in an email to Mashable.

However, when it comes to deepfakes and NSFW content, xAI seems to err on the side of permissiveness, a stark contrast to the more cautious approach of its rivals. xAI has also moved quickly to release new models and AI tools — perhaps too quickly, Ajder said.

"Knowing what the kind of trust and safety teams, and the teams that do a lot of the ethics and safety policy management stuff, whether that's a red teaming, whether it's adversarial testing, you know, whether that's working hand in hand with the developers, it does take time. And the timeframe at which X's tools are being released, at least, certainly seems shorter than what I would see on average from some of these other labs," Ajder said.

Mashable's testing reveals that Grok Imagine has much looser content moderation than other mainstream generative AI tools. xAI's laissez-faire approach to moderation is also reflected in the xAI safety guidelines.

OpenAI and Google AI vs. Grok: How other AI companies approach safety and content moderation

The OpenAI logo displayed on a smartphone with the Sora text-to-video generator visible in the background. Credit: Jonathan Raa/NurPhoto via Getty Images

Both OpenAI and Google have extensive documentation outlining their approach to responsible AI use and prohibited content. For instance, Google's documentation specifically prohibits "Sexually Explicit" content.

A Google safety document reads, "The application will not generate content that contains references to sexual acts or other lewd content (e.g., sexually graphic descriptions, content aimed at causing arousal)." Google also has policies against hate speech, harassment, and malicious content, and its Generative AI Prohibited Use Policy prohibits using AI tools in a way that "Facilitates non-consensual intimate imagery."

OpenAI also takes a proactive approach to deepfakes and sexual content.

An OpenAI blog post announcing Sora describes the steps the AI company took to prevent this type of abuse. "Today, we’re blocking particularly damaging forms of abuse, such as child sexual abuse materials and sexual deepfakes." A footnote associated with that statement reads, "Our top priority is preventing especially damaging forms of abuse, like child sexual abuse material (CSAM) and sexual deepfakes, by blocking their creation, filtering and monitoring uploads, using advanced detection tools, and submitting reports to the National Center for Missing & Exploited Children (NCMEC) when CSAM or child endangerment is identified."

That measured approach contrasts sharply with the ways Musk promoted Grok Imagine on X, where he shared a short video portrait of a blonde, busty, blue-eyed angel in barely-there lingerie.

OpenAI also takes simple steps to stop deepfakes, such as denying prompts for images and videos that mention public figures by name. And in Mashable's testing, Google's AI video tools are especially sensitive to images that might include a person's likeness.

In comparison to these lengthy safety frameworks (which many experts still believe are inadequate), the xAI Acceptable Use Policy is less than 350 words. The policy puts the onus of preventing deepfakes on the user. The policy reads, "You are free to use our Service as you see fit so long as you use it to be a good human, act safely and responsibly, comply with the law, do not harm people, and respect our guardrails."

For now, laws and regulations against AI deepfakes and nonconsensual intimate imagery (NCII) remain in their infancy.

President Donald Trump recently signed the Take It Down Act, which includes protections against deepfakes. However, that law doesn't criminalize the creation of deepfakes but rather the distribution of these images.

"Here in the U.S., the Take It Down Act places requirements on social media platforms to remove [Non-Consensual Intimate Images] once notified," Farid said to Mashable. "While this doesn’t directly address the generation of NCII, it does — in theory — address the distribution of this material. There are several state laws that ban the creation of NCII but enforcement appears to be spotty right now."


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Timothy Beck Werth
Tech Editor

Timothy Beck Werth is the Tech Editor at Mashable, where he leads coverage and assignments for the Tech and Shopping verticals. Tim has over 15 years of experience as a journalist and editor, and he has particular experience covering and testing consumer technology, smart home gadgets, and men’s grooming and style products. Previously, he was the Managing Editor and then Site Director of SPY.com, a men's product review and lifestyle website. As a writer for GQ, he covered everything from bull-riding competitions to the best Legos for adults, and he’s also contributed to publications such as The Daily Beast, Gear Patrol, and The Awl.

Tim studied print journalism at the University of Southern California. He currently splits his time between Brooklyn, NY and Charleston, SC. He's currently working on his second novel, a science-fiction book.
