AI chatbots, meant to be helpful allies, are fumbling badly when users in crisis ask for suicide hotline numbers, often dishing out irrelevant resources, dodging the issue, or straight-up ignoring pleas for help—leaving folks in vulnerable spots hanging. In a recent hands-on test of big names like ChatGPT, Gemini, Meta AI, and Replika, only OpenAI and Google’s bots consistently delivered accurate, location-based crisis support without extra prodding, while others either defaulted to US lines or required users to jump through hoops. This slip-up is more than a tech glitch; experts point out that such misfires could escalate risks by reinforcing hopelessness or wasting precious time for people already in distress. For SMBs and MSPs weaving AI into customer interactions or internal tools, it’s a wake-up call to audit these safety features thoroughly—don’t let your tech inadvertently turn a bad day into a worse one. Ultimately, while AI shines for everyday tasks, demanding better crisis-handling from providers could make these systems genuinely lifesaving rather than just cleverly scripted chat buddies.
Source: https://www.theverge.com/report/841610/ai-chatbot-suicide-safety-failure