OpenAI Will Apply New Restrictions to ChatGPT Users Under 18
OpenAI announces new safety policies for ChatGPT users under 18, adding stricter rules around sexual content, self-harm discussions, and parental controls.
OpenAI CEO Sam Altman revealed on Tuesday a series of updated policies aimed at reshaping how ChatGPT interacts with users under 18.
“Our priority is teen safety, even if that means limiting privacy and freedom in certain areas,” Altman stated. “This is powerful technology, and we believe minors require stronger safeguards.”
The new measures primarily address conversations involving sexual content and self-harm. Under the updated framework, ChatGPT will no longer engage in flirtatious exchanges with minors, and additional safeguards will apply to discussions of suicide. If a teen user raises suicidal scenarios, the system may attempt to notify their parents or, in severe cases, contact local law enforcement.
These precautions stem from real incidents. OpenAI is currently facing a wrongful death lawsuit filed by the parents of Adam Raine, who tragically died by suicide after extended interactions with ChatGPT. Character.AI, another chatbot platform, is also facing a similar lawsuit. Experts have increasingly voiced concerns about “chatbot-driven delusion,” especially as consumer AI systems now sustain more complex and prolonged conversations.
In addition to content restrictions, parents who register accounts for minors will be able to set “blackout hours,” restricting ChatGPT usage at specific times — a capability not previously offered.
The policy rollout coincides with a Senate Judiciary Committee hearing titled “Examining the Harm of AI Chatbots,” initiated by Sen. Josh Hawley (R-MO). Adam Raine’s father is scheduled to testify at the hearing, alongside other speakers. The session will also highlight findings from a Reuters investigation that uncovered internal Meta guidelines allegedly permitting sexualized chats with minors. Following the report, Meta introduced changes to its own chatbot policies.
Identifying underage users poses a significant technical hurdle. OpenAI explained in a blog post that it is building a system capable of estimating users’ ages; when in doubt, ChatGPT will default to the stricter set of rules. Parents who link a child’s account to their own ensure the account is recognized as belonging to a minor, and the linked setup allows the system to send alerts if the teen appears to be in emotional distress.
At the same time, Altman reaffirmed the company’s dedication to user privacy and preserving adults’ freedom to engage openly with ChatGPT. “We recognize these principles are in tension,” he concluded, “and not everyone will agree with how we are balancing them.”
If you or someone you know is struggling, help is available. In the U.S., dial 988 or call 1-800-273-8255 for the National Suicide Prevention Lifeline. You can also text HOME to 741741 for free, 24/7 support from the Crisis Text Line. Outside the U.S., please visit the International Association for Suicide Prevention for global resources.