Forbes December 18, 2025
In today’s column, I examine the disconcerting emergence of AI meta-hallucinations, which can arise when generative AI provides mental health advice.
Here’s the deal. Suppose that a person asks generative AI such as ChatGPT, Claude, Grok, Gemini, etc., for mental health guidance. The AI dispenses seemingly insightful mental health wisdom, but the user wonders how the AI arrived at the psychological diagnosis and corresponding recommendation. The user then asks the AI to explain itself.
At this juncture, the wheels come off the bus. The explanation is off-the-wall and doesn’t seem to make much sense. How did this happen? The response might be due to a so-called AI meta-hallucination. This is the latest terminology associated with AI hallucinations and refers to...