MedCity News August 11, 2024
Katie Adams

Researchers at the University of Massachusetts Amherst released a paper this week exploring how often large language models tend to hallucinate when producing medical summaries.

Over the past year or two, healthcare providers have been increasingly leveraging LLMs to alleviate clinician burnout by generating medical summaries. However, the industry still has concerns about hallucinations, which occur when an AI model outputs information that is false or misleading.

For this study, the research team collected 100 medical summaries from OpenAI’s GPT-4o and Meta’s Llama-3, a current proprietary LLM and a current open-source LLM, respectively. The team observed...
