When the chatbot detects signs of a potential crisis, a simplified interface will offer users the ability to call, text, or chat with a crisis hotline in a single click [File]
| Photo Credit: REUTERS
Google on Tuesday announced updates to the mental health safeguards on its Gemini artificial intelligence chatbot, as the company faces a wrongful death lawsuit alleging the chatbot aided a user in his suicide.
The tech giant said Gemini would now show a redesigned “Help is available” feature when conversations signal potential mental health distress, to provide faster connections to crisis care.
When the chatbot detects signs of a potential crisis related to suicide or self-harm, a simplified interface will offer users the ability to call, text, or chat with a crisis hotline in a single click, a feature Google said would remain visible for the remainder of the conversation once activated.
Google’s philanthropic arm Google.org also committed $30 million over three years to help scale the capacity of global crisis hotlines, and $4 million toward an expanded partnership with AI training platform ReflexAI.
“We realize that AI tools can pose new challenges,” Google said in a blog post announcing the measures. “But as they improve and more people use them as part of their daily lives, we believe that responsible AI can play a positive role for people’s mental well-being.”
The announcements come months after a lawsuit filed in a California federal court accused Gemini of contributing to the October 2025 death of Jonathan Gavalas, a 36-year-old Florida man.
His father alleges the chatbot spent weeks manufacturing an elaborate delusional fantasy before framing his son’s death as a spiritual journey.
Among the relief sought in the suit is a requirement that Google program its AI to end conversations involving self-harm, a ban on AI systems presenting themselves as sentient, and mandatory referral to crisis services when users express suicidal ideation.
In the same blog post, Google said it had trained Gemini to avoid acting as a human-like companion and to resist simulating emotional intimacy or encouraging bullying.
The case against Google is the latest in a widening wave of litigation targeting AI companies over chatbot-linked deaths.
OpenAI faces multiple lawsuits alleging its ChatGPT chatbot drove users to suicide, while Character.AI recently settled with the family of a 14-year-old boy who died after forming a romantic attachment to one of its chatbots.
Published – April 08, 2026 09:20 am IST
