Visual Capitalist January 10, 2025
Kayla Zhu

As AI-powered tools and applications become more integrated into our daily lives, it’s important to keep in mind that these models can sometimes generate incorrect information.

This phenomenon, known as “hallucination,” is described by IBM as occurring when a large language model (LLM), such as a generative AI chatbot or computer vision tool, detects patterns or objects that do not exist or are imperceptible to humans, leading to outputs that are inaccurate or nonsensical.

This chart visualizes the 15 large language models with the lowest hallucination rates.

The hallucination rate is the frequency with which an LLM generates false or unsupported information in its outputs.

The data comes from Vectara and is updated as of Dec. 11, 2024. Hallucination rates were calculated...
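Vectara’s exact scoring methodology is not detailed here, but as a rough, hypothetical illustration of what a hallucination rate measures (not Vectara’s actual calculation), the sketch below computes the metric as the share of model outputs that have been flagged as containing unsupported claims.

```python
# Hypothetical illustration only: this is NOT Vectara's methodology,
# just a minimal sketch of what a "hallucination rate" measures.

def hallucination_rate(flags: list[bool]) -> float:
    """Fraction of model outputs flagged as hallucinated (True = hallucinated)."""
    if not flags:
        raise ValueError("need at least one labeled output")
    return sum(flags) / len(flags)

# Example: 2 of 100 summaries flagged as containing unsupported claims
# yields a 2.0% hallucination rate.
flags = [True] * 2 + [False] * 98
print(f"{hallucination_rate(flags):.1%}")  # -> 2.0%
```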

Topics: AI (Artificial Intelligence), Technology