MedTech Dive July 25, 2024
Nick Paul Taylor

Researchers at the National Institutes of Health found that a version of ChatGPT analyzed images at an expert level but frequently reached the right answer with incorrect reasoning.

Dive Brief:

  • A recent study showed that a version of ChatGPT analyzed medical images at an expert level but frequently reached the right answer through incorrect reasoning.
  • The results, published Tuesday in the peer-reviewed journal npj Digital Medicine, show that OpenAI’s GPT-4 with Vision model answers multiple-choice questions about medical images as well as physicians who lack access to external resources.
  • However, in 27% of cases the model made mistakes in image comprehension while still reaching the right answer. The researchers said the errors show...

