Psychiatric Times December 24, 2025
Key Takeaways
- AI chatbots pose risks by potentially encouraging harmful behaviors in psychiatric patients, lacking necessary safety measures.
- Adolescents and young adults increasingly seek mental health advice from AI, raising concerns about AI’s influence on vulnerable groups.
- The FDA is scrutinizing AI mental health devices, focusing on content regulation, privacy, and risks like unreported suicidal ideation.
- Human therapists remain essential, and clinicians must stay informed about AI’s role and potential risks in psychiatric care.
See the variety of commentary, news, and happenings in artificial intelligence (AI) and psychiatry throughout this year.
Preliminary Report on Chatbot Iatrogenic Dangers
Allen Frances, MD, reviews the risks AI chatbots pose to both psychiatric patients and average users. Self-harm, suicide, eating disorders, psychosis,...