HCP Live July 26, 2024
Connor Iapoce

Across 40 clinical scenarios, ChatGPT failed to provide a comprehensive response to roughly 50% of clinical questions and hallucinated nearly 30% of cited sources.

ChatGPT answered correctly in more than 80% of complex open-ended vitreoretinal clinical scenarios but demonstrated a reduced capability to offer a comprehensive response, according to data presented at the American Society of Retina Specialists (ASRS) 42nd Annual Meeting.1

Across the 40 open-ended clinical scenarios, the artificial intelligence (AI) chatbot failed to provide a comprehensive response to approximately 50% of clinical questions, and nearly 30% of the sources it cited were hallucinated. Hallucinations occur when a large language model (LLM) produces nonsensical or inaccurate responses presented as factual.2

“This demonstrates that while ChatGPT is rapidly growing more accurate, it is not yet suitable as...
