MedCity News August 11, 2024
Katie Adams

Researchers at the University of Massachusetts Amherst released a paper this week showing that large language models tend to hallucinate quite a bit when producing medical summaries.

Over the past year or two, healthcare providers have been increasingly leveraging LLMs to alleviate clinician burnout by generating medical summaries. However, the industry still has concerns about hallucinations, which occur when an AI model outputs information that is false or misleading.

For this study, the research team collected 100 medical summaries from OpenAI’s GPT-4o and Meta’s Llama-3, an up-to-date proprietary LLM and an up-to-date open-source LLM, respectively. The team observed...
