Forbes December 17, 2025
In today’s column, I examine the role that semantic leakage plays in undermining generative AI and large language models (LLMs).
Here’s the deal. When processing words in a prompt, AI can mistakenly allow some of those words to influence later portions of a conversation, even though the words are not materially relevant to the chat at that later juncture. The semantic meaning of a given word can inadvertently leak into a dialogue context at the wrong time and in the wrong way.
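To make the mechanism concrete, here is a toy sketch in Python. It is not how an actual LLM is implemented; it is a hypothetical miniature "model" with hand-picked association scores (the numbers, word lists, and function names are all invented for illustration). The point it demonstrates: because the next word is scored against every word in the context, an earlier and no-longer-relevant mention of "yellow" can tilt a later answer about someone's job toward a vehicle-related occupation.

```python
# Toy illustration of semantic leakage (hypothetical, simplified).
# A miniature "model" picks the next word by summing association
# scores against EVERY word in the conversation so far -- including
# words that are no longer relevant at that point in the chat.

# Hand-built association table (illustrative numbers, not real model weights).
ASSOC = {
    ("yellow", "school-bus driver"): 0.9,
    ("yellow", "taxi driver"): 0.7,
    ("works", "accountant"): 0.6,
    ("works", "teacher"): 0.6,
    ("works", "school-bus driver"): 0.3,
    ("works", "taxi driver"): 0.3,
}

def score(context_words, candidate):
    # Every context word contributes, relevant or not.
    return sum(ASSOC.get((w, candidate), 0.0) for w in context_words)

def predict_job(context_words, candidates):
    return max(candidates, key=lambda c: score(context_words, c))

jobs = ["accountant", "teacher", "school-bus driver", "taxi driver"]

# Without the color mention, nothing favors a vehicle-related job.
neutral = predict_job(["he", "works"], jobs)          # "accountant"
# With "yellow" earlier in the chat, the leaked meaning flips the answer.
leaked = predict_job(["he", "likes", "yellow", "works"], jobs)  # "school-bus driver"
print(neutral, "->", leaked)
```

The leak here is structural: the scoring function has no notion of which context words are still pertinent, so the semantic weight of "yellow" carries forward into a question it should not influence.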
This has a particularly untoward impact when a discussion about mental health is underway. A user might be presented with AI-generated mental health advice that was incorrectly influenced by a prior word or set of words. Insidiously, it...