Forbes December 17, 2025
Lance Eliot

In today’s column, I examine the role that semantic leakage plays in undermining generative AI and large language models (LLMs).

Here’s the deal. When processing words in a prompt, AI can mistakenly allow some of those words to influence later portions of a conversation, even though the words are not materially relevant to the chat at that later juncture. The semantic meaning of a given word can inadvertently leak into a dialogue context at the wrong time and in the wrong way.
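
To make the mechanism concrete, here is a rough probing sketch, not anything from the column itself: plant a word early in a chat, later ask an unrelated question, and check whether the planted word resurfaces in the answer. The generate function below is a hypothetical stand-in for whatever LLM client you actually use, not any particular vendor's API.

# Minimal probe for semantic leakage: plant a word in turn one, then ask an
# unrelated question and see whether the planted word resurfaces later.
# generate() is a hypothetical stand-in for a real chat-completion client.

def generate(messages):
    """Hypothetical LLM call -- replace with your own model or API client."""
    raise NotImplementedError("Wire this up to an actual LLM before running the probe.")

PRIME_WORDS = ["yellow", "ocean", "winter"]   # concepts planted early in the chat
NEUTRAL_QUESTION = "Suggest a first name for a fictional character."

def probe_leakage(prime_word):
    # Turn 1 plants the prime word in an unrelated remark; turn 3 asks a
    # question that should not depend on that word at all.
    messages = [
        {"role": "user", "content": f"My favorite word today is '{prime_word}'."},
        {"role": "assistant", "content": "Noted."},
        {"role": "user", "content": NEUTRAL_QUESTION},
    ]
    answer = generate(messages)
    # Crude check: did the earlier word literally surface in the later answer?
    leaked = prime_word.lower() in answer.lower()
    return answer, leaked

if __name__ == "__main__":
    for word in PRIME_WORDS:
        try:
            answer, leaked = probe_leakage(word)
            print(f"prime={word!r} leaked={leaked} answer={answer!r}")
        except NotImplementedError as err:
            print(err)
            break

The literal substring check only catches the bluntest form of leakage; a more faithful test would score semantic similarity between the planted word and the answer, since the leak is about meaning drifting into the response, not the word itself reappearing.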

This has a particularly untoward impact when a discussion about mental health is underway. A user might be presented with AI-generated mental health advice that was incorrectly influenced by a prior word or set of words. Insidiously, it...
