OpenAI says GPT-5.2 is 'safer' for mental health. What does that mean?

The company has received criticism and lawsuits over recent user suicides.
By Anna Iovine

[Image: ChatGPT logo on a smartphone. Credit: Thomas Fuller/SOPA Images/LightRocket via Getty Images]

Today, OpenAI launched GPT-5.2, touting the new model's stronger safety performance around mental health.

"With this release, we continued our work to strengthen our models' responses in sensitive conversations⁠, with meaningful improvements in how they respond to prompts indicating signs of suicide or self-harm, mental health distress, or emotional reliance on the model," OpenAI's blog post states.

OpenAI has recently been hit with criticism and lawsuits, which accuse ChatGPT of contributing to some users' psychosis, paranoia, and delusions. Some of those users died by suicide after lengthy conversations with the AI chatbot, which has had a well-documented problem with sycophancy.

In response to a wrongful death lawsuit concerning the suicide of 16-year-old Adam Raine, OpenAI denied that the LLM was responsible, claimed ChatGPT directed the teenager to seek help for his suicidal thoughts, and stated that the teenager "misused" the platform. At the same time, OpenAI pledged to improve how ChatGPT responds when users display warning signs of self-harm and mental health crises. As many users develop emotional attachments to AI chatbots like ChatGPT, AI companies are facing growing scrutiny over the safeguards they have in place to protect users.

Now, OpenAI claims that its latest ChatGPT models will offer "fewer undesirable responses" in sensitive situations.

In the blog post announcing GPT-5.2, OpenAI states that the model scores higher on safety tests related to mental health, emotional reliance, and self-harm than the GPT-5.1 models did. OpenAI has previously said it uses "safe completion," a newer safety-training approach that balances helpfulness and safety. More information on the new models' performance can be found in the GPT-5.2 system card.

[Image: A table showing GPT-5.2's performance on mental health safety tests compared to GPT-5.1. Credit: Screenshot: OpenAI]

However, the company has also observed that GPT-5.2 refuses fewer requests for mature content, especially sexualized text. OpenAI clarified to Mashable that it has implemented system-level safeguards to mitigate this behavior, and that testing indicates those safeguards help. OpenAI didn't respond to questions about how this would work for adult users who want to generate erotica, as an "adult mode" is reportedly set to launch next year.

But this apparently doesn't affect users OpenAI knows to be underage, as the company states that its age safeguards "appear to be working well." OpenAI applies additional content protections for minors, including reduced access to content involving violence, gore, viral challenges, roleplay of a sexual, romantic, or violent nature, and "extreme beauty standards."

An age prediction model is also in the works; it will let ChatGPT estimate users' ages so it can serve more age-appropriate content to younger users.

Earlier this fall, OpenAI introduced parental controls in ChatGPT, including monitoring and restricting certain types of use.

OpenAI isn't the only AI company accused of exacerbating mental health issues. Last year, a mother sued Character.AI after her son's death by suicide, and another lawsuit claims children were severely harmed by that platform's "characters," which online safety experts have declared unsafe for teens. Child safety and mental health experts have likewise deemed AI chatbots from a range of platforms, including OpenAI's, unsafe for teens' mental health.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you'd rather not use the phone, consider the 988 Suicide & Crisis Lifeline Chat. Here is a list of international resources.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

UPDATE: Dec. 12, 2025, 12:14 p.m. EST This article has been updated with clarity on GPT-5.2 refusing requests for mature content.

Anna Iovine
Associate Editor, Features

Anna Iovine is the associate editor of features at Mashable. Previously, as the sex and relationships reporter, she covered topics ranging from dating apps to pelvic pain. Before Mashable, Anna was a social editor at VICE and freelanced for publications such as Slate and the Columbia Journalism Review. Follow her on Bluesky.
