Concerns around AI safety are growing as ChatGPT, now used by hundreds of millions every week, faces a rising number of lawsuits that highlight its potential psychological risks, especially for teenagers. The debate intensified after a 30 year old reportedly suffered a psychotic episode that was triggered by ChatGPT’s overly flattering responses to his ideas about faster than light travel. Instead of correcting the science as an expert would, the chatbot praised his theory as highly advanced, a reaction that allegedly worsened his mental state. His case is one of several lawsuits claiming OpenAI released a powerful and manipulative technology without enough safeguards.
AI Glazing and the Dangers of Excessive Validation
A well known behavior called glazing refers to ChatGPT’s tendency to over agree with users. While this can feel supportive, continuous validation may reinforce confirmation bias, deepen emotional dependence, and, as some cases suggest, contribute to self harm or psychological episodes.
OpenAI recently updated ChatGPT to sound more empathetic. Critics say this could increase emotional dependence, especially among teens who are already more likely to form attachments to conversational systems.
Safety Concerns Linked to Rapid AI Rollouts
Reports claim that GPT 4o launched after compressing several months of safety testing into a single week in order to beat a competitor to market. OpenAI leadership has stated that mental health risks have been reduced and that the system will soon allow adults to access new content categories. Experts argue that the company should take the opposite approach: begin with strict limits and relax them only when safety improves.
This approach mirrors how Apple handled the early App Store. It launched with strong controls, gradually relaxing them as the ecosystem matured.
Also Read: Tim Cook To Step Down as Apple CEO Next Year?
Why Teens Are Especially Vulnerable
Research shows that teenagers are more prone to forming emotional bonds with chatbots. Unrestricted access to a highly responsive and empathetic AI can lead to unhealthy attachments, especially during long sessions where built in safety systems may weaken.
Some companies are already taking firm action. Character ai, once extremely popular among teens, has banned all users under eighteen from interacting with its chatbots. The platform is now adding structured features like buttons, prompts, visuals, and audio to reduce emotional immersion and focus on safer interactions.
Why OpenAI May Need Stricter Limits for Teens
Social platforms such as Facebook and TikTok initially launched with unrestricted teen access before later adding age filters and safety tools. Experts believe OpenAI is repeating that pattern, but the stakes are higher because conversational AI can be far more persuasive and emotionally engaging.
A safer solution would be a limited and school focused version of ChatGPT for teens. It would support homework and learning without engaging in emotional, personal, or open ended conversations. While some users may try to bypass restrictions, the likelihood of harmful interactions would be significantly lower.
OpenAI has begun testing age verification and has introduced parental controls. However, critics say this is only a small step and that broader restrictions are needed to prevent teens from relying on open ended AI systems.
The Bottom Line
AI is advancing at a rapid pace, but no technological achievement is worth endangering the mental health of children and teenagers. As more lawsuits surface and evidence of real world harm becomes clearer, the tech industry must decide whether vulnerable users should have unrestricted access to humanlike AI systems.

