MIT Technology Review May 20, 2024
Rhiannon Williams

Large language models don’t have a theory of mind the way humans do—but they’re getting better at tasks designed to measure it in humans.

Humans are complicated beings. The ways we communicate are multilayered, and psychologists have devised many kinds of tests to measure our ability to infer meaning and understanding from interactions with each other.

AI models are getting better at these tests. New research published today in Nature Human Behaviour found that some large language models (LLMs) perform as well as, and in some cases better than, humans when presented with tasks designed to test the ability to track people's mental states, known as "theory of mind."

This doesn’t mean AI systems are actually able to...
