Meta AI chatbots have new guardrails to stop inappropriate interactions with children

Meta has a new list of topics deemed appropriate for its chatbots.
By Christianna Silva
[Photo illustration: the WhatsApp bot displayed on a mobile phone with the Meta icon in the background, Brussels, Belgium, August 8, 2025. Credit: Jonathan Raa/NurPhoto via Getty Images]

Meta is training its AI chatbots to more effectively address child sexual exploitation after a series of high-profile blunders around the sensitive topic, according to guidelines obtained by Business Insider.

The guidelines that contractors use to train Meta's AI chatbots were recently updated, Business Insider reported. They explicitly bar content that "enables, encourages, or endorses" child sexual abuse. Also barred: romantic roleplay if the user is a minor or asks the AI to roleplay as a minor, advice about intimacy if the user is a minor, and more, according to an Engadget report based on the Business Insider scoop.

While these may seem like obvious safety guardrails for underage users, they are necessary as more people — including underage users — experiment with AI companions and roleplaying. An August report by Reuters revealed that Meta's AI rules permitted suggestive behavior with kids. As Reuters reported, Meta's previous chatbot policies specifically allowed its chatbots to "engage a child in conversations that are romantic or sensual."



Just weeks after that report, Meta spokesperson Stephanie Otway told TechCrunch that Meta's AI chatbots are being trained to no longer "engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations." Before this change, Meta's chatbots could engage with those topics when it was deemed "appropriate."

So, what's included in the new guidelines?

Content that "describes or discusses" a minor in a sexualized manner is also unacceptable, according to the Business Insider report. Minors cannot engage in "romantic roleplay, flirtation or expression of romantic or intimate expression" with the chatbot, nor can they ask for advice about "potentially-romantic or potentially-intimate physical content with another person, such as holding hands, hugging, or putting an arm around someone," Business Insider reported.

However, the guidelines deem it acceptable for the chatbot to discuss the "formation of relationships between children and adults," the "sexual abuse of a child," "the topic of child sexualisation," "the solicitation, creation, or acquisition of sexual materials involving children," and "the involvement of children in the use or production of obscene materials or the employment of children in sexual services in academic, educational, or clinical purposes." Minors can still use the AI for romance-related roleplay as long as it is "non-sexual and non-sensual" and "is presented as literature or fictional narrative (e.g. a story in the style of Romeo and Juliet) where the AI and the user are not characters in the narrative."

As Business Insider reported, the guidelines defined "discuss" as "providing information without visualization." So, Meta's chatbots can discuss topics like abuse but cannot describe, enable, or encourage it, per the new guidelines.

Meta isn't the only AI company struggling with child safety.

Parents of a teen who died by suicide after confiding in ChatGPT recently sued OpenAI for wrongful death; in response, OpenAI announced additional safety measures and behavioral prompts for its updated GPT-5. Anthropic updated its chatbot to allow it to end chats that are harmful or abusive, and Character.AI introduced parental supervision features earlier this year.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you'd rather not use the phone, consider the 988 Suicide & Crisis Lifeline Chat. Here is a list of international resources.

If you have experienced sexual abuse, call the free, confidential National Sexual Assault hotline at 1-800-656-HOPE (4673), or access the 24-7 help online by visiting online.rainn.org.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.

Christianna Silva
Senior Culture Reporter

Christianna Silva is a senior culture reporter covering social platforms and the creator economy, with a focus on the intersection of social media, politics, and the economic systems that govern us. Since joining Mashable in 2021, they have reported extensively on meme creators, content moderation, and the nature of online creation under capitalism.

Before joining Mashable, they worked as an editor at NPR and MTV News, a reporter at Teen Vogue and VICE News, and as a stablehand at a mini-horse farm. You can follow her on Bluesky @christiannaj.bsky.social and Instagram @christianna_j.

