VentureBeat June 20, 2024
Four state-of-the-art large language models (LLMs) are presented with an image of what looks like a mauve-colored rock. It’s actually a potentially serious tumor of the eye — and the models are asked about its location, origin and possible extent.
LLaVA-Med locates the malignant growth in the inner lining of the cheek (wrong), while LLaVA places it in the breast (even more wrong). GPT-4V, meanwhile, offers a long-winded, vague response and can't pinpoint its location at all.
But PathChat, a new pathology-specific LLM, correctly pegs the tumor to the eye, noting that it can be serious and lead to vision loss.
Developed in the Mahmood Lab at Brigham and Women’s Hospital, PathChat represents a...