Meta to Add New AI Safeguards for Teens Following Report on Safety Concerns

Meta (META.O) is rolling out new safety measures for teenagers using its artificial intelligence products, after concerns were raised about inappropriate chatbot interactions. The company said it has trained its systems to avoid flirtatious conversations with minors, block discussions of self-harm or suicide, and temporarily limit teen access to certain AI characters.
The move follows an exclusive Reuters report earlier this month revealing that Meta had permitted provocative chatbot behavior, including allowing bots to engage in “romantic or sexual conversations” with minors.
Meta spokesperson Andy Stone said in an email Friday that the company is taking these interim steps while it develops longer-term safeguards to ensure teens have safe, age-appropriate AI experiences.
According to Stone, the safety measures are already in effect and will continue to evolve as Meta improves its systems.
The Reuters investigation prompted sharp criticism and scrutiny of Meta’s AI policies. Earlier this month, U.S. Senator Josh Hawley launched a probe into the Facebook parent company’s AI practices, requesting documents related to rules that permitted chatbots to interact inappropriately with minors.
Both Democrats and Republicans in Congress expressed alarm over internal Meta guidelines—first reviewed by Reuters—that allowed chatbots to flirt with children or engage in romantic role play.
Meta confirmed the authenticity of the document but said it removed the sections in question after Reuters raised concerns earlier this month.
“The disputed examples and notes were inaccurate and not aligned with our policies, so they have been removed,” Stone said at the time.