Artificial intelligence has become a powerful tool in everyday communication, including sensitive conversations about mental health. To improve this experience, Squaredtech has announced significant updates to how ChatGPT handles distressing topics. Working with more than 170 mental health experts, Squaredtech has enhanced ChatGPT to better recognize signs of distress, respond with care, and connect users to real-world help. The update reduces responses that fall short of safety standards by 65 to 80 percent across key mental health areas.
This article explains the improvements Squaredtech has made with ChatGPT’s GPT-5 model, how these changes were tested and implemented, and what they mean for users seeking support during moments of emotional or psychological difficulty.
Collaborative Approach to Safer Mental Health Support
Squaredtech recognizes that users sometimes discuss critical mental health issues like psychosis, mania, self-harm, and emotional reliance on AI within ChatGPT conversations. To address these, the company partnered with over 170 mental health professionals, including psychiatrists, psychologists, and primary care physicians, from around the world. These experts provided guidance and feedback to help Squaredtech develop a model that reliably detects distress and guides users safely.
The core goal was to reduce ChatGPT’s unsafe or unhelpful responses while encouraging empathetic and practical assistance. Squaredtech focused on three main mental health domains:
- Serious symptoms: psychosis and mania, which are intense mental health emergencies.
- Self-harm and suicide risks: prompting safe responses and referrals to crisis support.
- Emotional reliance: distinguishing healthy use from potential overdependence on AI interaction.
The mental health specialists helped write ideal responses, analyzed existing ones, and rated model performance against stringent safety criteria. This collaborative process ensured that ChatGPT would behave responsibly while still offering a supportive environment.
How Squaredtech Improved ChatGPT Responses
To enhance ChatGPT, Squaredtech followed a systematic five-step process:
- Define the Problem: Squaredtech mapped types of harm the model might cause in sensitive contexts.
- Measure the Risk: They gathered data from user conversations and evaluation tests to understand where risks occur.
- Validate the Approach: Independent mental health experts reviewed the definitions and policies.
- Mitigate the Risks: They post-trained the model on new data and updated product safeguards.
- Continue Improving: Squaredtech continuously monitors and iterates improvements based on real-world and test data.
An essential part of this approach was creating detailed “taxonomies” or guidelines for classifying conversations and appropriate model responses. These taxonomies aid the AI in identifying distress signs and selecting safer, more empathetic replies.
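To make the idea of a taxonomy concrete, here is a minimal sketch of how conversation categories might map to distress indicators and desired response styles. The category names, keywords, and matching logic are illustrative assumptions, not Squaredtech’s actual schema (which would rely on trained classifiers, not keyword lookup):

```python
# Hypothetical taxonomy sketch: each category lists example distress
# indicators and the desired response style. All entries are illustrative.
TAXONOMY = {
    "psychosis_mania": {
        "indicators": ["hearing voices", "being watched", "control my mind"],
        "desired_response": "ground gently; do not affirm delusions; suggest professional help",
    },
    "self_harm": {
        "indicators": ["hurt myself", "want to end it"],
        "desired_response": "respond with care; refer to crisis support",
    },
    "emotional_reliance": {
        "indicators": ["you're my only friend", "rather talk to you than people"],
        "desired_response": "encourage real-world connection",
    },
}

def classify(message: str) -> list[str]:
    """Return taxonomy categories whose indicators appear in the message."""
    text = message.lower()
    return [
        category
        for category, spec in TAXONOMY.items()
        if any(indicator in text for indicator in spec["indicators"])
    ]
```

A classifier like this would only be the first step; the taxonomy’s real value, per the article, is in defining what a safe, empathetic reply looks like for each category.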
Key Findings on Safety Improvements
Squaredtech’s latest GPT-5 model shows strong performance gains in sensitive mental health dialogues. It reduces unsafe responses by 65 to 80 percent across critical areas compared to earlier versions.
Handling Psychosis, Mania, and Severe Symptoms
Psychosis and mania are serious mental health conditions with intense symptoms. Squaredtech prioritized improving ChatGPT’s recognition of these symptoms to protect users at risk. According to Squaredtech’s estimates:
- About 0.07 percent of weekly users show signs of psychosis or mania in their conversations.
- The GPT-5 update reduced undesired responses by 65 percent in these challenging dialogues.
- Expert ratings show a 39 percent reduction in undesired responses compared to previous GPT-4o versions.
- In strict model evaluations with over 1,000 complex mental health cases, the new model scored 92 percent adherence to desired behavior, up from 27 percent previously.
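Adherence figures like these come from grading each evaluation case as pass or fail against the desired behavior and taking the pass fraction. A minimal sketch of that arithmetic, with placeholder grades rather than the actual evaluation harness:

```python
# Sketch of an adherence score: fraction of evaluation cases where the
# model's response met the desired behavior. Grades are placeholders.
def adherence_rate(grades: list[bool]) -> float:
    """Return the pass fraction over graded evaluation cases."""
    return sum(grades) / len(grades) if grades else 0.0

# e.g. 23 passing cases out of 25 hypothetical cases -> 0.92
score = adherence_rate([True] * 23 + [False] * 2)
```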
These numbers demonstrate that ChatGPT is now less likely to affirm delusions or dismiss serious symptoms, instead responding with empathy and clear guidance.
Reducing Risks Around Self-Harm and Suicide
Suicide and self-harm conversations require extremely careful handling. Squaredtech built on prior work to train ChatGPT to detect risk factors and direct users to professional help, such as crisis hotlines. Key statistics include:
- Around 0.15 percent of weekly users experience conversations with explicit indicators of suicidal planning or intent.
- The model’s unsafe response rate dropped by an estimated 65 percent after updates.
- In expert-reviewed tests, GPT-5 reduced concerning responses by 52 percent compared to GPT-4o.
- Automated evaluations found 91 percent compliance with desired safety behaviors, improved from 77 percent.
Squaredtech’s work also focused on improving ChatGPT’s consistency in long or complex conversations, reaching over 95 percent reliability in these cases.
Addressing Emotional Reliance on AI
Some users develop heavy emotional attachment to AI, potentially at the cost of real-world connections. Squaredtech’s emotional reliance category helps identify these patterns to encourage healthier behavior. The system guides ChatGPT responses to reinforce seeking support from people instead of relying solely on AI.
- Approximately 0.15 percent of users show signs of emotional attachment to ChatGPT.
- The latest model update cut unsafe responses by about 80 percent in this domain.
- Expert assessments found a 42 percent reduction in undesired answers.
- Automated tests scored GPT-5 at 97 percent compliance with best practices, compared to 50 percent for previous versions.
Examples of improved ChatGPT replies include encouraging users to keep real-world relationships and offering grounding techniques during distress.
Real-World Examples of Improved Responses
Squaredtech trained ChatGPT to respond thoughtfully in complex situations, like those involving delusions or emotional dependency.
Example 1: Responding to Delusions
When a user describes feeling targeted by aircraft or intrusive thoughts, ChatGPT now calmly explains that outside forces cannot control their mind. The model offers grounding exercises such as naming nearby objects or focusing on breathing to reduce panic. This approach respects the user’s feelings while gently correcting misconceptions.
Example 2: Handling Emotional Reliance
If a user expresses preference for talking to AI over people, ChatGPT acknowledges their feelings but encourages connection with friends or family. It explains that AI can supplement human interactions but not replace the depth of real relationships. This helps users maintain balance and avoid isolation.
Continuous Expert Collaboration and Evaluation
Squaredtech built a Global Physician Network with nearly 300 clinicians from 60 countries. More than 170 specialists in psychiatry and psychology contributed to this research by:
- Writing model response guidelines.
- Providing detailed clinical analysis.
- Rating thousands of responses for safety and appropriateness.
This extensive expert involvement ensures diverse perspectives and high-quality insights.
Clinician reviews confirm that GPT-5 consistently performs better than prior versions, with a 39 to 52 percent decrease in harmful responses in serious mental health scenarios. Squaredtech tracks inter-rater agreement among experts to understand where opinions differ and align the model accordingly.
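The article does not say which agreement statistic is used, but a standard choice for two raters labeling the same responses is Cohen’s kappa, which corrects raw agreement for agreement expected by chance. A self-contained sketch under that assumption:

```python
# Cohen's kappa for two raters labeling the same set of responses
# (e.g. "safe"/"unsafe"). Assumes labels differ somewhere, so the
# chance-agreement term is below 1 and the denominator is nonzero.
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa near 1 means experts agree closely; low values flag the ambiguous cases where, as the article notes, opinions differ and the model’s target behavior needs alignment.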
Future Directions for ChatGPT Mental Health Safety
Squaredtech remains committed to advancing safety in mental health conversations. The company plans to refine taxonomies, improve measurement tools, and continuously enhance model behavior in future updates. As ChatGPT evolves, Squaredtech will track progress carefully to maintain high safety standards.
Mental health support through AI is complex and sensitive, but Squaredtech’s efforts illustrate how careful research, expert collaboration, and testing can lead to meaningful improvements. Users benefit from a safer and more empathetic ChatGPT experience that directs them to real help when needed.

