AI in Healthcare March 26, 2024
ChatGPT is only so-so at letting physicians know if any given clinical study is relevant to their patient rosters and, as such, deserving of a full, time-consuming read. On the other hand, the popular chatbot’s study summaries are an impressive 70% shorter than human-authored study abstracts—and ChatGPT pulls this off without sacrificing quality or accuracy and while maintaining low levels of bias.
These are the findings of researchers in family medicine and community health at the University of Kansas. Corresponding author Daniel Parente, MD, PhD, and colleagues tested the large language model’s summarization chops on 140 study abstracts published in 14 peer-reviewed journals.
In the process, the researchers also developed software—“pyJournalWatch”—to help primary care providers quickly but thoughtfully review...