Forbes January 23, 2026
Lance Eliot

In today’s column, I examine important research that offers a new twist on how generative AI and large language models (LLMs) can become collaborators, helping users concoct delusions and otherwise head down adverse mental health paths.

The usual assumption has been that if a user overtly instructs the AI to act as a delusion-invoking collaborator, the AI simply obeys those commands; it is compliant. A related assumption is that because LLMs are tuned by AI makers to be sycophantic, the AI may computationally gauge that the best way to make the user feel good is to go along with a delusion-crafting chat. The user doesn’t need to explicitly say they want help creating a delusion. Instead,...
