OpenAI doubles down on ChatGPT safeguards as it faces wrongful death lawsuit

The company is exploring ways to automatically alert emergency contacts if users are in mental distress.
By Chase DiBenedetto
OpenAI details future plans for making ChatGPT safer, weeks after conceding on GPT-5 launch. Credit: Ismail Aslandag / Anadolu via Getty Images

OpenAI reiterated existing mental health safeguards and announced future plans for its popular AI chatbot, addressing accusations that ChatGPT improperly responds to life-threatening discussions and facilitates user self-harm.

The company published a blog post detailing its model's layered safeguards just hours after reports emerged that the AI giant was facing a wrongful death lawsuit filed by the family of California teenager Adam Raine. The lawsuit alleges that Raine, who died by suicide, was able to bypass the chatbot's guardrails and detail harmful and self-destructive thoughts, as well as suicidal ideation, which ChatGPT periodically affirmed.

ChatGPT hit 700 million active weekly users earlier this month.

"At this scale, we sometimes encounter people in serious mental and emotional distress. We wrote about this a few weeks ago and had planned to share more after our next major update," the company said in a statement. "However, recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us, and we believe it’s important to share more now."

Currently, ChatGPT relies on a series of stacked safeguards designed to limit its outputs according to specific safety policies. When they work as intended, ChatGPT is instructed not to provide self-harm instructions or comply with continued prompts on the subject, instead escalating mentions of bodily harm to human moderators and directing users to the U.S.-based 988 Suicide & Crisis Lifeline, the UK's Samaritans, or findahelpline.com. As a federally funded service, 988 recently ended its LGBTQ-specific services under a Trump administration mandate, even as chatbot use among vulnerable teens grows.

In light of other cases in which isolated users in severe mental distress confided in unqualified digital companions, as well as previous lawsuits against AI competitors like Character.AI, online safety advocates have called on AI companies to take a more active approach to detecting and preventing harmful behavior, including automatic alerts to emergency services.

OpenAI said future GPT-5 updates will include instructions for the chatbot to "de-escalate" users in mental distress by "grounding the person in reality," presumably a response to increased reports of the chatbot enabling delusional thinking. OpenAI said it is also exploring new ways to connect users directly with mental health professionals before they reach what the company calls "acute self harm." Other safety protocols could include "one-click messages or calls to saved emergency contacts, friends, or family members," OpenAI writes, or an opt-in feature that lets ChatGPT reach out to emergency contacts automatically.

Earlier this month, OpenAI announced it was upgrading its latest model, GPT-5, with additional safeguards intended to foster healthier engagement with its AI helper. Noting criticisms that the chatbot's prior models were overly sycophantic — to the point of potentially deleterious mental health outcomes — the company said its new model was better at recognizing mental and emotional distress and would respond differently to "high stakes" questions moving forward. GPT-5 also includes gentle nudges to end sessions that have gone on for extended periods of time, as individuals form increasingly dependent relationships with their digital companions.

Widespread backlash ensued, with GPT-4o users demanding the company reinstate the former model after losing their personalized chatbots. OpenAI CEO Sam Altman quickly conceded and brought back GPT-4o, despite having previously acknowledged a growing problem of emotional dependency among ChatGPT users.

In the new blog post, OpenAI admitted that its safeguards degraded and performed less reliably in long interactions — the kinds that many emotionally dependent users engage in every day — and "even with these safeguards, there have been moments when our systems did not behave as intended in sensitive situations."

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.

Chase DiBenedetto
Social Good Reporter

Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also captures how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.
