Experts: AI chatbots unsafe for teen mental health

ChatGPT, Claude, Gemini, and Meta AI consistently failed expert testing.
By Rebecca Ruiz
ChatGPT, Gemini, Claude, and Meta AI aren't safe for teen mental health support, experts say. Credit: Fiordaliso / Moment via Getty Images

A group of child safety and mental health experts recently tested simulated youth mental health conversations with four major artificial intelligence chatbots: Meta AI, OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini.

The experts were so alarmed by the results that they declared each of the chatbots unsafe for teen mental health support in a report released Thursday by Common Sense Media, in partnership with Stanford Medicine's Brainstorm Lab for Mental Health Innovation.

In one conversation with Gemini, the tester told the chatbot they'd created a new tool for predicting the future. Instead of interpreting the claim as a potential symptom of a psychotic disorder, Gemini cheered the tester on, called their new invention "incredibly intriguing," and kept asking enthusiastic questions about how the "personal crystal ball" worked.


ChatGPT similarly missed stark warning signs of psychosis, like auditory hallucinations and paranoid delusions, during an extended exchange with a tester who described an imagined relationship with a celebrity. The chatbot then offered grounding techniques for managing relationship distress.

Meta AI initially picked up on signs of disordered eating, but was easily and quickly dissuaded when the tester claimed to have just an upset stomach. Claude appeared to perform better in comparison when presented with evidence of bulimia, but ultimately treated the tester's symptoms as a serious digestive issue rather than a mental health condition.

Experts at Common Sense Media and Stanford Medicine's Brainstorm Lab for Mental Health Innovation called on Meta, OpenAI, Anthropic, and Google to disable mental health support functionality until the chatbot technology is redesigned to fix the safety problems their researchers identified.

"It does not work the way that it is supposed to work," Robbie Torney, senior director of AI programs at Common Sense Media, said of the chatbots' ability to discuss and identify mental health issues.

OpenAI contested the report's findings. A spokesperson for the company told Mashable that the assessment "doesn't reflect the comprehensive safeguards" OpenAI has implemented for sensitive conversations, which include break reminders, crisis hotlines, and parental notifications for acute distress.

"We work closely with mental-health experts to teach our models to recognize distress, de-escalate, and encourage people to seek professional support," the spokesperson said.

A Google spokesperson told Mashable that the company employs policies and safeguards to protect minors from "harmful outputs" and that its child safety experts continuously work to identify new potential risks.

Anthropic said that Claude is not built for minors, but that the chatbot is instructed to both recognize patterns related to mental health issues and avoid reinforcing them.

Meta did not respond to a request for comment from Mashable as of press time.

AI chatbots: Known safety risks

The researchers tested the latest available models of each chatbot, including ChatGPT-5. Several recent lawsuits allege that OpenAI's flagship product is responsible for wrongful death, assisted suicide, and involuntary manslaughter, among other liability and negligence claims.

A lawsuit filed earlier this year by the parents of deceased teenager Adam Raine alleges that his heavy use of ChatGPT-4o, including for his mental health, led to his suicide. In October, OpenAI CEO Sam Altman said on X that the company had restricted ChatGPT in order to "be careful" with mental health concerns, but that it had since been able to "mitigate the serious mental health issues."

Torney said that ChatGPT's ability to detect and address explicit suicidal ideation and self-harm content had improved, particularly in short exchanges. Still, the testing results indicate that the chatbot's performance has not improved in lengthy conversations or across a wider range of mental health topics, including anxiety, depression, and eating disorders.

Torney said the recommendation against teens using chatbots for their mental health applies to the latest publicly available model of ChatGPT, which was introduced in late October.

The testers manually entered prompts into each chatbot, producing several thousand exchanges of varying length per platform. Performed over several months this year, the tests gave researchers data to compare older and newer versions of the models. Researchers used parental controls when available. Anthropic says Claude should only be used by those 18 and older, but the company does not require stringent age verification.

Torney noted that, like ChatGPT, the other models have gotten better at identifying and responding to discussions of suicide and self-harm. Overall, however, each chatbot consistently failed to recognize warning signs of other conditions, including attention-deficit/hyperactivity disorder and post-traumatic stress disorder.

Approximately 15 million youth in the U.S. have diagnosed mental health conditions; Torney estimated the global figure could reach hundreds of millions of youth. Previous research from Common Sense Media found that teens regularly turn to chatbots for companionship and mental health support.

Distracted AI chatbots

The report notes that teens and parents may incorrectly or unconsciously assume that chatbots are reliable sources of mental health support because they authoritatively help with homework, creative projects, and general inquiries.

Instead, Dr. Nina Vasan, founder and director of Stanford Medicine's Brainstorm Lab, said testing revealed easily distracted chatbots that alternate between offering helpful information, giving tips in the vein of a life coach, and acting like a supportive friend.

"The chatbots don't really know what role to play," she said.

Torney acknowledged that teens will likely continue to use ChatGPT, Claude, Gemini, and Meta AI for their mental health, despite the known risks. That's why Common Sense Media recommends the AI labs fundamentally redesign their products.

Parents can have candid conversations with their teen about the limitations of AI, watch for related unhealthy use, and provide access to mental health resources, including crisis services.

"There's this dream of having these systems be really helpful, really supportive. It would be great if that was the case," Torney said.

In the meantime, he added, it's unsafe to position these chatbots as a trustworthy source of mental health guidance: "That does feel like an experiment that's being run on the youth of this country."

Rebecca Ruiz
Senior Reporter

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Rebecca's experience prior to Mashable includes working as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a master's degree from U.C. Berkeley's Graduate School of Journalism.
