MIT Technology Review May 20, 2024
Large language models don’t have a theory of mind the way humans do—but they’re getting better at tasks designed to measure it in humans.
Humans are complicated beings. The ways we communicate are multilayered, and psychologists have devised many kinds of tests to measure our ability to infer what other people mean, believe, and intend from our interactions with one another.
AI models are getting better at these tests. New research published today in Nature Human Behaviour found that some large language models (LLMs) perform as well as, and in some cases better than, humans when presented with tasks designed to test the ability to track people’s mental states, known as “theory of mind.”
This doesn’t mean AI systems are actually able to...