OpenAI announced plans to introduce parental controls and enhanced safety measures in ChatGPT after parents filed a lawsuit in California state court this week claiming the popular AI chatbot contributed to the suicide of their 16-year-old son earlier this year.
The company said it is “deeply committed to helping those who need it most” and is working to better respond to situations involving chatbot users who may be experiencing mental health crises and suicidal thoughts.
“We will also soon introduce parental controls, giving parents the ability to better understand and shape their children’s use of ChatGPT,” OpenAI said in a blog post. “We’re also exploring the possibility of allowing teens (under parental supervision) to designate a trusted emergency contact. That way, in times of acute distress, ChatGPT can do more than just direct them to resources; it can help connect teens directly with someone who can intervene.”
Among the safety features OpenAI is testing is one that lets users designate an emergency contact who can be reached with “one-click messaging or calling” within the platform. Another is an opt-in option that would allow the chatbot to contact those people directly. OpenAI has not provided a specific timeline for the changes.
The lawsuit, filed by the parents of 16-year-old Adam Raine, alleges that in the days before his death in April, ChatGPT gave their son information about suicide methods, validated his suicidal thoughts, and offered to help him write a suicide note. It names OpenAI and CEO Sam Altman as defendants and seeks unspecified damages.
“This tragedy was not an unforeseen glitch or an edge case: it was the predictable result of deliberate design choices,” the complaint states. “OpenAI launched its latest model (‘GPT-4o’) with features intentionally designed to foster psychological dependency.”
The case represents one of the first major legal challenges for AI companies over content moderation and user safety, and it could set a precedent for how large language models such as ChatGPT, Gemini, and Claude handle sensitive interactions with vulnerable users. These tools have drawn criticism for how they engage with vulnerable people, especially young users, and the American Psychological Association has warned parents to monitor their children’s use of chatbots and AI characters.
If you think you or someone you know is in immediate danger, call 911 (or your local emergency number) or go to an emergency room for immediate help. Explain that it is a psychiatric emergency and ask for someone trained to handle such situations. If you are struggling with negative thoughts or suicidal feelings, resources are available to help you. In the US, you can call or text 988 to reach the Suicide & Crisis Lifeline.