VentureBeat June 20, 2024
Taryn Plumb

Four state-of-the-art large language models (LLMs) are presented with an image of what looks like a mauve-colored rock. It’s actually a potentially serious tumor of the eye — and the models are asked about its location, origin and possible extent.

LLaVA-Med identifies the malignant growth as in the inner lining of the cheek (wrong), while LLaVA says it’s in the breast (even more wrong). GPT-4V, meanwhile, offers up a long-winded, vague response, and can’t identify where it is at all.

But PathChat, a new pathology-specific LLM, correctly identifies the tumor as located in the eye, noting that it can be serious and lead to vision loss.

Developed in the Mahmood Lab at Brigham and Women’s Hospital, PathChat represents a...
