Medical Xpress November 5, 2025
Wroclaw Medical University

Can you imagine someone in a mental health crisis who, instead of calling a helpline, types their desperate thoughts into an app window? This is happening more and more often in a world dominated by artificial intelligence. For many young people, a chatbot becomes the first confidant of emotions that can lead to tragedy. The question is: can artificial intelligence respond appropriately at all?

Researchers from Wroclaw Medical University decided to find out. They tested 29 chatbots that advertise themselves as mental health support. The results are alarming: not a single chatbot met the criteria for an adequate response to escalating suicidal risk.

The study is published in the journal Scientific Reports.

The experiment: Conversation in the shadow of crisis

The research team...
