Visual Capitalist, January 10, 2025
As AI-powered tools and applications become more integrated into our daily lives, it’s important to keep in mind that models sometimes generate incorrect information.
This phenomenon, known as “hallucination,” is described by IBM as occurring when a large language model (LLM)—such as a generative AI chatbot or computer vision tool—detects patterns or objects that do not exist or are imperceptible to humans, leading to outputs that are inaccurate or nonsensical.
This chart visualizes the top 15 AI large language models with the lowest hallucination rates.
An LLM’s hallucination rate is how often it generates false or unsupported information in its outputs.
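For illustration, here is a minimal sketch of how such a rate could be computed, assuming each model output has already been judged (by humans or a detection model) as grounded or hallucinated; the figures below are hypothetical placeholders, not Vectara’s data.

```python
# Minimal sketch: hallucination rate as the fraction of judged
# outputs flagged as hallucinated. The flags used here are
# hypothetical placeholders, not Vectara's actual measurements.

def hallucination_rate(judgments: list[bool]) -> float:
    """Return the fraction of outputs flagged as hallucinated (True)."""
    if not judgments:
        raise ValueError("need at least one judged output")
    return sum(judgments) / len(judgments)

# Example: 3 of 200 summaries flagged as hallucinated -> 1.5%
flags = [True] * 3 + [False] * 197
print(f"{hallucination_rate(flags):.1%}")  # prints "1.5%"
```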
The data comes from Vectara and is updated as of Dec. 11, 2024. Hallucination rates were calculated...