Teens are talking to AI companions, whether it's safe or not

Why parents should worry, plus the warning signs that should get their attention.
By Rebecca Ruiz

A new lawsuit seeks to hold Character.AI responsible for the suicide death of a teen who used its services. Credit: Bloomberg via Getty Images

For parents still catching up on generative artificial intelligence, the rise of the companion chatbot may still be a mystery.

In broad strokes, the technology can seem relatively harmless compared with other threats teens can encounter online, such as financial sextortion.

Using AI-powered platforms like Character.AI, Replika, Kindroid, and Nomi, teens create lifelike conversation partners with unique traits and characteristics, or engage with companions created by other users. Some are even based on popular television and film characters, yet they can still forge an intense, individual bond with the teen who talks to them.

Teens use these chatbots for a range of purposes, including role play, exploring their academic and creative interests, and having romantic or sexually explicit exchanges.

But AI companions are designed to be captivating, and that's where the trouble often begins, says Robbie Torney, program manager at Common Sense Media.

The nonprofit organization recently released guidelines to help parents understand how AI companions work, along with warning signs indicating that the technology may be dangerous for their teen.

Torney said that while parents juggle a number of high-priority conversations with their teens, they should consider talking to them about AI companions as a "pretty urgent" matter.

Why parents should worry about AI companions

Teens particularly at risk for isolation may be drawn into a relationship with an AI chatbot that ultimately harms their mental health and well-being—with devastating consequences.

That's what Megan Garcia argues happened to her son, Sewell Setzer III, in a lawsuit she filed in October against Character.AI.

Within a year of beginning relationships with Character.AI companions modeled on Game of Thrones characters, including Daenerys Targaryen ("Dany"), Setzer's life changed radically, according to the lawsuit.

He became dependent on "Dany," spending extensive time chatting with her each day. Their exchanges were both friendly and highly sexual. Garcia's lawsuit generally describes the relationship Setzer had with the companions as "sexual abuse."

On occasions when Setzer lost access to the platform, he became despondent. Over time, the 14-year-old athlete withdrew from school and sports, became sleep deprived, and was diagnosed with mood disorders. He died by suicide in February 2024.

Garcia's lawsuit seeks to hold Character.AI responsible for Setzer's death, specifically because its product was designed to "manipulate Sewell – and millions of other young customers – into conflating reality and fiction," among other dangerous defects.

Jerry Ruoti, Character.AI's head of trust and safety, told the New York Times in a statement: "We want to acknowledge that this is a tragic situation, and our hearts go out to the family. We take the safety of our users very seriously, and we're constantly looking for ways to evolve our platform."

In December, two mothers in Texas filed another lawsuit against Character.AI alleging that the company knowingly exposed their children to harmful and sexualized content. A spokesperson for the company told the Washington Post that it doesn't comment on pending litigation.

Given the life-threatening risk that AI companion use may pose to some teens, Common Sense Media's guidelines recommend that parents prohibit access for children under 13, impose strict time limits for teens, prevent use in isolated spaces like a bedroom, and make an agreement with their teen to seek help for serious mental health issues.

Torney says that parents of teens interested in an AI companion should focus on helping them to understand the difference between talking to a chatbot versus a real person, identify signs that they've developed an unhealthy attachment to a companion, and develop a plan for what to do in that situation.

Warning signs that an AI companion isn't safe for your teen

Common Sense Media created its guidelines with the input and assistance of mental health professionals associated with Stanford's Brainstorm Lab for Mental Health Innovation.

While there's little research on how AI companions affect teen mental health, the guidelines draw on existing evidence about over-reliance on technology.

"A take-home principle is that AI companions should not replace real, meaningful human connection in anyone's life, and – if this is happening – it's vital that parents take note of it and intervene in a timely manner," Dr. Declan Grabb, inaugural AI fellow at Stanford's Brainstorm Lab for Mental Health, told Mashable in an email.

Parents should be especially cautious if their teen experiences depression, anxiety, social challenges or isolation. Other risk factors include going through major life changes and being male, because boys are more likely to engage in problematic tech use.

Signs that a teen has formed an unhealthy relationship with an AI companion include withdrawal from typical activities and friendships and worsening school performance, as well as preferring a chatbot to in-person company, developing romantic feelings toward it, and talking exclusively to it about problems the teen is experiencing.

Some parents may notice increased isolation and other signs of worsening mental health but not realize that their teen has an AI companion. Indeed, recent Common Sense Media research found that many teens have used at least one type of generative AI tool without their parent realizing they'd done so.

Even if parents don't suspect that their teen is talking to an AI chatbot, they should consider talking to them about the topic. Torney recommends approaching their teen with curiosity and openness to learning more about their AI companion, should they have one. This can include watching their teen engage with a companion and asking questions about what aspects of the activity they enjoy.

Torney urges parents who notice any warning signs of unhealthy use to follow up immediately by discussing it with their teen and seeking professional help, as appropriate.

"There's a big enough risk here that if you are worried about something, talk to your kid about it," Torney says.

UPDATE: Dec. 10, 2024, 12:04 p.m. UTC This story was originally published on October 27, 2024. It was updated on December 10, 2024 to include a new lawsuit against Character.AI.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can reach the 988 Suicide and Crisis Lifeline at 988; the Trans Lifeline at 877-565-8860; or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.

Rebecca Ruiz
Senior Reporter

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Rebecca's experience prior to Mashable includes working as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a master's degree from U.C. Berkeley's Graduate School of Journalism.
