Medical Xpress December 5, 2025
Artificial Intelligence (AI) can converse, mirror emotions, and simulate human engagement. Publicly available large language models (LLMs)—often used as personalized chatbots or AI characters—are increasingly involved in mental health-related interactions. While these tools offer new possibilities, they also pose significant risks, especially for vulnerable users.
Researchers from the Else Kröner Fresenius Center (EKFZ) for Digital Health at TUD Dresden University of Technology and the University Hospital Carl Gustav Carus have therefore published two articles calling for stronger regulatory oversight.
Their commentary, "Artificial intelligence characters are dangerous without legal guardrails," published in Nature Human Behaviour, outlines the urgent need for clear regulation of AI characters. A second commentary, published in npj Digital Medicine, highlights the dangers of chatbots offering therapy-like guidance.