Medical Xpress December 5, 2025
Maria Schmeiser, Dresden University of Technology

Artificial Intelligence (AI) can converse, mirror emotions, and simulate human engagement. Publicly available large language models (LLMs)—often used as personalized chatbots or AI characters—are increasingly involved in mental health-related interactions. While these tools offer new possibilities, they also pose significant risks, especially for vulnerable users.

Researchers from Else Kröner Fresenius Center (EKFZ) for Digital Health at TUD Dresden University of Technology and the University Hospital Carl Gustav Carus have therefore published two articles calling for stronger regulatory oversight.

Their commentary, “Artificial intelligence characters are dangerous without legal guardrails,” published in Nature Human Behaviour, outlines the urgent need for clear regulations governing AI characters. A second commentary, published in npj Digital Medicine, highlights the dangers of chatbots offering therapy-like guidance...
