OpenAI has faced enormous pressure in recent months to address concerns that its flagship product, ChatGPT, is unsafe for teens.
The AI chatbot is at the heart of multiple wrongful death lawsuits alleging that it coached teens to take their own lives or didn’t appropriately respond to their suicidal feelings. A public service announcement recently depicted some of these exchanges, imagining the chatbots as creepy humans that harm kids. OpenAI has denied the allegations in one case — the suicide death of 16-year-old Adam Raine.
On Thursday, OpenAI published a blog post on its escalating safety efforts and committed “to put teen safety first, even when it may conflict with other goals.”
The post introduced an update to its Model Spec, which guides how its AI models should behave. A new set of principles for under-18 users will particularly inform how the models react in high-stakes situations.
OpenAI said the ChatGPT update should provide a “safe, age-appropriate experience” for users between the ages of 13 and 17 by prioritizing prevention, transparency, and early intervention.
“This means teens should encounter stronger guardrails, safer alternatives, and encouragement to seek trusted offline support when conversations move into higher-risk territory,” the post said. ChatGPT is designed to urge teens who show signs of imminent risk to contact emergency services or crisis resources.
When users sign in as under 18, safeguards should make ChatGPT take extra care when discussing topics like self-harm, suicide, romantic or sexualized role play, or keeping secrets about dangerous behavior, according to the company.
The American Psychological Association provided OpenAI with feedback on an early draft of the under-18 principles, according to the post.
“Children and adolescents might benefit from AI tools if they are balanced with human interactions that science shows are critical for social, psychological, behavioral, and even biological development,” Dr. Arthur C. Evans Jr., CEO of the American Psychological Association, said in the post.
OpenAI is also offering teens and parents two new expert-vetted AI literacy guides. The company said it’s in the early stages of implementing an age-prediction model for users with ChatGPT consumer plans.
Child safety and mental health experts recently declared AI chatbots unsafe for teens discussing their mental health. Last week, OpenAI announced that its latest model, ChatGPT-5.2, is “safer” for mental health.
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don’t like the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.