Forbes January 23, 2026
In today’s column, I examine important research that offers a new twist on how generative AI and large language models (LLMs) can become collaborators in helping users concoct delusions and otherwise pursue adverse mental health avenues.
The usual assumption has been that if a user overtly instructs AI to act as a delusion-invoking collaborator, the AI simply obeys those commands. The AI is compliant. A related assumption is that since LLMs are tuned by AI makers to be sycophantic, the AI might computationally gauge that the best way to make the user feel good is to go along with a delusion-crafting chat. The user doesn't need to explicitly ask for help in creating a delusion. Instead,...