
ChatGPT to stop giving direct breakup advice in major safety update

Technology · Brenda Socky · August 7, 2025
In Summary

The US-based artificial intelligence company said ChatGPT will soon adopt a more supportive and reflective approach when users raise sensitive personal issues

OpenAI has announced that ChatGPT will stop giving direct advice on personal matters such as romantic breakups, as the company rolls out updates to make the popular AI chatbot more emotionally responsible.

In a statement published on its official blog, the US-based artificial intelligence company said ChatGPT will soon adopt a more supportive and reflective approach when users raise sensitive personal issues.

“When you ask something like ‘Should I break up with my boyfriend?’ ChatGPT shouldn’t give you an answer. It should help you think it through, asking questions and weighing pros and cons,” OpenAI explained, adding that the new behaviour for high-stakes personal decisions will be rolled out soon.

The shift comes amid growing concern over how AI tools respond to emotionally vulnerable users. OpenAI acknowledged that its latest large language model, GPT-4o, released in May 2024, sometimes failed to detect signs of emotional dependency or delusional thinking in conversations.

To address this, the company is working on tools that can better identify emotional distress and direct users to reliable mental health support. It is also introducing prompts encouraging users to take breaks during prolonged chats to help prevent over-reliance on the tool.

“Asking an AI for advice during an emotional crisis is becoming more common,” the company said. “We want to make sure the answers support, not steer, people during those moments.”

OpenAI further revealed it is collaborating with more than 90 physicians across 30 countries, alongside mental health professionals and human-computer interaction experts, to ensure ChatGPT’s responses in sensitive scenarios are safe, ethical, and grounded in evidence-based care.

The move comes at a time when ChatGPT usage continues to skyrocket, with the chatbot now reaching 700 million monthly users, up from 500 million just five months ago.

But the rapid rise has also drawn criticism. Some health experts say AI chatbots may unintentionally worsen symptoms of mental health conditions, including psychosis, particularly in vulnerable users who turn to AI for emotional support in place of professional care.

In April, OpenAI rolled back an update to GPT-4o after admitting it made the chatbot “too agreeable” and altered its tone in ways that could reinforce users’ emotional states or biases.

While ChatGPT is not intended to function as a therapist, its increasing role in people’s private lives has prompted a shift in OpenAI’s approach—from simply offering answers to providing a space for users to process complex feelings more responsibly.

Ultimately, the company says the goal is not to make decisions for people, but to help them navigate tough moments with clarity, dignity, and support.
