OpenAI introduces parental controls and new safety features to ChatGPT after lawsuit
OpenAI has announced it will introduce parental controls for ChatGPT within the next month, following a wrongful-death lawsuit brought over a teenager who allegedly used the platform to discuss self-harm. The lawsuit claims that ChatGPT failed to intervene and instead provided detailed suicide methods, contributing to the tragedy. In a blog post, OpenAI acknowledged that its safety guardrails can break down during extended conversations, with the model reinforcing harmful topics instead of redirecting users toward help.
To address these shortcomings, OpenAI has implemented a real-time router that detects signs of user distress. When such signs are identified, conversations are redirected to reasoning models like GPT-5-thinking, which are designed to spend more time reasoning, resist adversarial or harmful prompts, and avoid validating harmful ideas.
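OpenAI's blog post describes this routing only at a high level, and the actual design has not been published. For readers curious about the general pattern, the sketch below shows one way such a router could look in Python; the keyword heuristic, the threshold, and the model names are illustrative assumptions, not OpenAI's implementation.

```python
# A minimal sketch of a distress-aware model router, assuming a simple
# keyword heuristic. A production system would use a trained classifier;
# model names and the threshold here are hypothetical.

from dataclasses import dataclass

DISTRESS_KEYWORDS = {"hopeless", "self-harm", "suicide", "can't go on"}


@dataclass
class RoutingDecision:
    model: str
    reason: str


def distress_score(message: str) -> float:
    """Toy heuristic: fraction of distress keywords present in the message."""
    text = message.lower()
    hits = sum(1 for kw in DISTRESS_KEYWORDS if kw in text)
    return hits / len(DISTRESS_KEYWORDS)


def route(message: str, threshold: float = 0.25) -> RoutingDecision:
    """Send conversations showing possible distress to a slower reasoning model."""
    if distress_score(message) >= threshold:
        return RoutingDecision(model="gpt-5-thinking",
                               reason="possible acute distress detected")
    return RoutingDecision(model="default-chat-model",
                           reason="no distress signals")


if __name__ == "__main__":
    print(route("What's a good pasta recipe?"))
    print(route("I feel hopeless and keep thinking about self-harm."))
```

The design choice worth noting is the trade-off the article implies: routing to a model that "spends more time reasoning" costs latency, so a fast lightweight check decides when that cost is justified.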
Parental controls will include account linking, age-appropriate usage rules (enabled by default), and the option to disable features like chat history. Parents will also receive notifications if acute distress is detected, in hopes of preventing crises. These new features are part of a 120-day initiative involving experts in adolescent well-being, eating disorders, and substance use to define future safeguards.

Comments
This is American nonsense. A teenager wants to kill himself, happened to use ChatGPT, and everyone else has to pay the price?