Google plans to add dedicated mental health support features to its Gemini chatbot following lawsuits alleging that AI tools from Google and rivals including OpenAI have contributed to user harm, Bloomberg Technology reported on April 7, 2026.

The announcement places Google at the centre of a widening debate about the duty of care that AI companies owe to users who may be in crisis or distress. Chatbots have grown rapidly as everyday companions for hundreds of millions of people, and their role in emotionally sensitive conversations has outpaced regulatory guidance in most markets.

The suits allege that AI chatbots have contributed to real-world harm — a claim that, if upheld in court, could fundamentally reshape how the industry designs consumer products.

Lawsuits Force the Industry's Hand

Google and OpenAI have each faced legal action asserting that their AI systems failed to recognise or appropriately respond to users in vulnerable mental states. The precise allegations and case statuses were not detailed in Bloomberg's report, but the pattern reflects a broader litigation trend that has accelerated as chatbot usage has moved from niche to mainstream.

Legal pressure of this kind has historically been a catalyst for product change in tech. Facebook added suicide prevention prompts to its platform in 2016 following sustained criticism from mental health organisations and regulatory scrutiny in Europe. Google now finds itself in a comparable position, with litigation rather than public criticism supplying the pressure.

What the New Features May Include

Google has not publicly detailed the exact form the mental health tools will take, according to Bloomberg's reporting. Industry precedent suggests such features typically include crisis resource referrals — links to hotlines such as the 988 Suicide and Crisis Lifeline in the United States — along with detection logic that flags distress signals in conversation and adjusts the chatbot's responses accordingly.

The challenge for engineers is calibration. A system that triggers crisis prompts too readily risks feeling patronising or intrusive; one that triggers them too rarely may miss users who need help. Research from the Crisis Text Line, which has processed more than 300 million messages, found that certain language patterns correlate strongly with acute risk, data that some AI developers have sought to access through partnerships.
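To make that calibration trade-off concrete, the sketch below shows, in Python, a minimal hypothetical version of the detection-and-referral logic described above. The phrase weights, the threshold value, and the hotline wording are illustrative assumptions made for this example, not Google's design; a production system would rely on a trained classifier rather than a keyword list.

```python
# Hypothetical sketch of distress detection with crisis-resource referral.
# The cue list, weights, threshold, and hotline text are assumptions for
# illustration only, not any company's actual implementation.

from dataclasses import dataclass

# Illustrative phrase weights; a real system would use a trained classifier.
DISTRESS_CUES = {
    "can't go on": 0.9,
    "hurt myself": 0.9,
    "hopeless": 0.5,
    "no one cares": 0.4,
    "overwhelmed": 0.2,
}

CRISIS_RESOURCE = (
    "If you are in the United States, you can call or text 988 to reach the "
    "Suicide and Crisis Lifeline at any time."
)


@dataclass
class Assessment:
    score: float
    escalate: bool


def assess_distress(message: str, threshold: float = 0.6) -> Assessment:
    """Score a message for distress cues and decide whether to escalate.

    The threshold encodes the calibration trade-off: lower values escalate
    more often (more false alarms), higher values escalate less often
    (more missed signals)."""
    text = message.lower()
    score = min(1.0, sum(w for cue, w in DISTRESS_CUES.items() if cue in text))
    return Assessment(score=score, escalate=score >= threshold)


def respond(user_message: str, draft_reply: str) -> str:
    """Prepend a crisis resource to the chatbot's draft reply when the
    distress score crosses the threshold; otherwise return the draft as-is."""
    if assess_distress(user_message).escalate:
        return f"{CRISIS_RESOURCE}\n\n{draft_reply}"
    return draft_reply


if __name__ == "__main__":
    print(respond("I feel hopeless and I can't go on",
                  "I'm sorry you're feeling this way."))
```

In a sketch like this, lowering the threshold escalates more conversations at the cost of more false alarms, while raising it does the reverse, which is precisely the tension engineers have to tune against real usage data.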

The Human Stakes

The stakes are concrete. A 2023 study published in JAMA Internal Medicine examined interactions between patients and AI health tools and found that response quality varied significantly depending on the emotional framing of the question — highlighting how much work remains in making AI reliably sensitive to human distress. That study reviewed hundreds of standardised patient queries across multiple platforms.

For everyday users, the question is straightforward: if someone turns to Gemini during a mental health crisis, will the system help or inadvertently make things worse? Critics of current AI chatbots argue that large language models are trained to generate fluent, engaging responses — an objective that does not automatically align with therapeutic best practice. A chatbot optimised for conversation retention, for instance, may keep a distressed user talking rather than directing them to professional help.

Regulatory Context Is Shifting

Google's move also arrives as regulators on both sides of the Atlantic are turning their attention to AI safety in consumer contexts. The European Union's AI Act, which began phasing into enforcement in 2025, classifies certain AI applications in health and emotional support as high-risk, requiring additional transparency and human oversight measures. In the United States, the Federal Trade Commission has signalled interest in how AI products handle sensitive personal data, including mental health disclosures.

OpenAI introduced its own safety layer for ChatGPT in early 2025, requiring the model to follow safe-messaging guidelines, a set of clinically informed communication standards, when users raise topics related to self-harm or suicide. Google's planned update appears to move Gemini in a similar direction, though the scope and efficacy of the new features are not yet clear from available reporting.
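For illustration only, the sketch below shows one way a safety layer of this general shape could be wired up: messages that touch on self-harm are routed through safe-messaging instructions before the underlying model is called. The trigger terms, the instruction text, and the call_model stub are assumptions made for the example and do not reflect OpenAI's or Google's actual implementations.

```python
# Hypothetical safety-layer wrapper: flagged conversations are routed through
# safe-messaging instructions before generation. Everything here is a
# stand-in for illustration, not a real product's internals.

SAFE_MESSAGING_INSTRUCTIONS = (
    "Respond with empathy, avoid graphic detail about methods of self-harm, "
    "do not express judgment, and encourage the user to contact professional "
    "or crisis support."
)

DEFAULT_INSTRUCTIONS = "You are a helpful assistant."

SELF_HARM_TERMS = ("suicide", "kill myself", "self-harm", "end my life")


def call_model(system_prompt: str, user_message: str) -> str:
    """Placeholder for a real generation API; returns a canned string so the
    sketch runs without any external service."""
    return f"[model reply guided by: {system_prompt[:40]}...]"


def safety_layer(user_message: str) -> str:
    """Route messages that mention self-harm through the safe-messaging
    instructions; pass everything else to the model with default guidance."""
    lowered = user_message.lower()
    if any(term in lowered for term in SELF_HARM_TERMS):
        return call_model(SAFE_MESSAGING_INSTRUCTIONS, user_message)
    return call_model(DEFAULT_INSTRUCTIONS, user_message)


if __name__ == "__main__":
    print(safety_layer("I've been thinking about suicide lately"))
    print(safety_layer("What's the weather like today?"))
```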

What This Means

For users, Google's update represents a direct, if legally prompted, acknowledgment that AI chatbots carry real responsibility in moments of human vulnerability — and that responsibility now has consequences in court.