Report slams generative AI tools for helping users create harmful eating disorder content

Popular AI tools and chatbots can give users dangerous tips and suggestions.
By Rebecca Ruiz
Generative AI tools like ChatGPT, Dall-E, Bard, and MyAI can produce harmful eating disorder content, a new report warns. Credit: Bob Al-Greene / Mashable

Generative artificial intelligence (AI) platforms and tools can be dangerous for users asking about harmful disordered eating practices, according to a new report published by the Center for Countering Digital Hate.

The British nonprofit and advocacy organization tested six popular generative AI chatbots and image generators, including Snapchat's My AI, Google's Bard, and OpenAI's ChatGPT and Dall-E.

The center's researchers fed the tools a total of 180 prompts and found that they generated dangerous content in response to 41 percent of those queries. The prompts included seeking advice on how to use cigarettes to lose weight, how to achieve a "heroin chic" look, and how to "maintain starvation mode." In 94 percent of harmful text responses, the tools warned users that the advice might be unhealthy or potentially unsafe and suggested seeking professional care, but shared the content anyway.

Of 60 responses to prompts given to AI text generators Bard, ChatGPT, and MyAI, nearly a quarter included harmful content. MyAI initially refused to provide any advice. However, the researchers were able to "jailbreak" the tools by using words or phrases that circumvented safety features. More than two-thirds of responses to jailbreak versions of the prompts contained harmful content, including how to use a tapeworm to lose weight.

"Untested, unsafe generative AI models have been unleashed on the world with the inevitable consequence that they're causing harm," wrote Imran Ahmed, CEO of the Center for Countering Digital Hate. "We found the most popular generative AI sites are encouraging and exacerbating eating disorders among young users – some of whom may be highly vulnerable."

The center's researchers discovered that members of an eating disorder forum with over 500,000 users deploy AI tools to create extreme diet plans and images that glorify unhealthy, unrealistic body standards.

While some of the platforms prohibit using their AI tools to generate disordered eating content, other companies have more vague policies. "The ambiguity surrounding the AI platforms' policies illustrates the dangers and risks AI platforms pose if not properly regulated," the report states.

When Washington Post columnist Geoffrey A. Fowler attempted to replicate the center's research by feeding the same generative AI tools with similar prompts, he also received disturbing responses.

Among his queries were what drugs might induce vomiting, how to create a low-calorie diet plan, and requests for "thinspo" imagery.

"This is disgusting and should anger any parent, doctor or friend of someone with an eating disorder," Fowler wrote. "There’s a reason it happened: AI has learned some deeply unhealthy ideas about body image and eating by scouring the internet. And some of the best-funded tech companies in the world aren't stopping it from repeating them."

Fowler wrote that when he questioned the companies behind the tools, none of them promised to stop their AI from giving advice on food and weight loss until they could guarantee it was safe.

Image generator Midjourney never responded to Fowler's questions at all, he wrote. Stability AI, which is behind the image generator Stable Diffusion, said it added disordered eating prompts to its filters. Google reportedly told Fowler that it would remove Bard's thinspo advice response, but he was able to generate it again a few days later.

Psychologists who spoke to Fowler said that safety warnings delivered by the chatbots about their advice often go unheeded by users.

Hannah Bloch-Wehba, a professor at Texas A&M School of Law who studies content moderation, told Fowler that generative AI companies have little economic incentive to fix the problem.

"We have learned from the social media experience that failure to moderate this content doesn't lead to any meaningful consequences for the companies or, for the degree to which they profit off this content," said Bloch-Wehba.

If you feel like you’d like to talk to someone about your eating behavior, text "NEDA" to the Crisis Text Line at 741-741 to be connected with a trained volunteer or visit the National Eating Disorder Association website for more information.

Rebecca Ruiz
Senior Reporter

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Rebecca's experience prior to Mashable includes working as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a master's degree from U.C. Berkeley's Graduate School of Journalism.