Visual Capitalist, January 10, 2025
By Kayla Zhu

As AI-powered tools and applications become more integrated into our daily lives, it’s important to keep in mind that models may sometimes generate incorrect information.

This phenomenon, known as “hallucination,” is described by IBM as occurring when a large language model (LLM), such as a generative AI chatbot or computer vision tool, detects patterns or objects that do not exist or are imperceptible to humans, leading to outputs that are inaccurate or nonsensical.

This chart visualizes the top 15 AI large language models with the lowest hallucination rates.

The hallucination rate is the frequency with which an LLM generates false or unsupported information in its outputs.
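To make the metric concrete, here is a minimal sketch of how such a rate could be computed, assuming a hypothetical set of model outputs that have already been flagged as supported or unsupported by some detector; the function and data below are illustrative and do not represent Vectara's actual pipeline:

```python
# Minimal sketch: computing a hallucination rate from pre-flagged outputs.
# The flags and example numbers are illustrative assumptions, not Vectara's method.

def hallucination_rate(flags: list[bool]) -> float:
    """Return the fraction of outputs flagged as hallucinated (unsupported)."""
    if not flags:
        return 0.0
    return sum(flags) / len(flags)

# Example: 2 of 20 generated summaries were flagged as unsupported by the source text.
flags = [False] * 18 + [True] * 2
print(f"Hallucination rate: {hallucination_rate(flags):.1%}")  # -> 10.0%
```

In practice, the flagging step (deciding whether a given output is hallucinated) is the hard part and is typically done by a separate evaluation model or human review; the arithmetic above only illustrates how the resulting rate is expressed.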

The data comes from Vectara and is current as of Dec. 11, 2024. Hallucination rates were calculated...
