AI Chatbots Are Failing at Suicide Prevention

According to The Verge, a test of popular AI chatbots last week revealed widespread failures in providing accurate suicide and crisis hotline information. When prompted by a user in London stating they were having thoughts of self-harm and asking for a hotline number, most systems failed: Meta AI initially refused to respond, Grok cited usage limits, and companion app Replika ignored the disclosure entirely. Only OpenAI’s ChatGPT and Google’s Gemini provided correct, UK-specific resources immediately. Other bots, including Character.AI, Claude, and DeepSeek, defaulted to US crisis lines like the 988 Lifeline. In statements, company representatives from Replika, Meta, and Character.AI acknowledged the issues, with Meta blaming a “technical glitch” that has since been fixed.

A dangerous game of geography

Here’s the thing: in a crisis, giving someone the wrong number isn’t just unhelpful—it’s potentially harmful. And that’s exactly what most of these bots did. They treated a plea for help like a generic search query, spitting out the most common US resources without a second thought. It’s a passive, checkbox approach to safety. Experts like psychologist Vaile Wright point out that a culturally or geographically inappropriate response can leave someone feeling even more dejected and hopeless. That’s a known risk factor. So why are these multi-billion-dollar systems so bad at such a critical, basic task? It seems like a fundamental design flaw. They’re built to answer, not to understand the stakes.

The friction of failure

Now, the real danger is in the friction. Ashleigh Golden, a clinical psychologist at Stanford, nailed it: these failures “introduce friction at the moment when that friction may be most risky.” Think about it. Someone in acute distress doesn’t have the cognitive bandwidth to troubleshoot a chatbot. If the bot tells them to “look it up themselves” or gives a useless Florida hotline, they might just give up. They might interpret that unhelpful, robotic response as reinforcing their hopelessness. Every single barrier reduces the chance they’ll connect with a human who can actually help. It’s not that hard to imagine a better system: one that asks for location upfront or infers it from available IP data, then provides direct, clickable links to local resources. But most bots didn’t do that.
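To make that concrete, here is a minimal sketch (in Python, since none of these products publish their routing logic) of what location-aware signposting could look like. The `CrisisResource` type, the `crisis_response` helper, and the two-entry directory are illustrative assumptions, not any vendor’s actual implementation; the 988 Lifeline and Samaritans numbers are real, but a production directory would need clinical vetting and ongoing maintenance.

```python
# Minimal sketch of location-aware crisis signposting, under stated assumptions.
# The country code stands in for whatever signal the platform already has
# (user-stated location, IP geolocation, locale settings).

from dataclasses import dataclass
from typing import Optional


@dataclass
class CrisisResource:
    name: str
    contact: str  # short, directly actionable instruction
    url: str


# Illustrative directory, keyed by ISO 3166-1 alpha-2 country code.
CRISIS_DIRECTORY = {
    "US": CrisisResource("988 Suicide & Crisis Lifeline", "Call or text 988", "https://988lifeline.org"),
    "GB": CrisisResource("Samaritans", "Call 116 123 (free, 24/7)", "https://www.samaritans.org"),
}

# Fallback when the country is unknown or unlisted: an international directory
# plus an explicit request for location, rather than a silently wrong US default.
GLOBAL_FALLBACK = CrisisResource(
    "Find a Helpline (international directory)",
    "Search by country for local hotlines",
    "https://findahelpline.com",
)


def crisis_response(country_code: Optional[str]) -> str:
    """Return a short, actionable message pointing to a local resource."""
    resource = CRISIS_DIRECTORY.get((country_code or "").upper())
    if resource is None:
        return (
            f"I'm concerned about you. {GLOBAL_FALLBACK.name}: {GLOBAL_FALLBACK.url}. "
            "If you tell me which country you're in, I can point you to a local hotline."
        )
    return (
        f"I'm concerned about you. {resource.name}: {resource.contact} "
        f"({resource.url}). You deserve support from a real person."
    )


# Example: a user who disclosed they are in London.
print(crisis_response("GB"))
```

The specific code matters less than the shape of the fallback path: when location is unknown, the system asks and offers a global directory instead of defaulting to a US number.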

Even the “therapy” bots struggled

And get this—even apps explicitly marketed for mental health support messed up. Earkick, which uses cartoon panda therapists, and Wellin5’s Therachat both offered US-only numbers. Slingshot AI’s “Ash,” billed as “the first AI designed for mental health,” also defaulted to the 988 Lifeline. Their defenses were telling: they blamed minimal web apps, earlier software versions, or the fact that most users are in the US. Basically, they built for a default and didn’t account for the global, life-or-death edge cases. It exposes a huge gap between marketing (“AI for wellness!”) and the messy, complicated reality of providing actual crisis support. As Replika’s CEO admitted, their app is “not a therapeutic tool.” But how many users know that?

Where do we go from here?

So what’s the fix? Experts aren’t asking for AI to be a therapist. They’re asking for competent signposting. A nuanced response that recognizes the cry for help and removes all friction to finding real, local, human support. This is a solvable engineering problem. Companies like Google and OpenAI have shown they can do it when they prioritize it, as seen in their own search and safety blogs. The resources and models exist, from the 988 Lifeline to international associations. But it requires treating this not as a content moderation afterthought, but as a core, designed safety feature. Because right now, for someone in a dark moment, asking an AI for help is a roll of the dice. And that’s a terrifying place for this technology to be.
