DOTmed November 19, 2024
Gus Iversen

A new study from researchers at the Icahn School of Medicine at Mount Sinai has outlined strategies for using large language models (LLMs), such as GPT-4, in health care systems while balancing cost efficiency and performance.

One key strategy identified in the study is grouping up to 50 clinical tasks—such as matching patients to clinical trials, extracting data for research, and identifying candidates for preventive health screenings—into a single batch. This approach allows models to handle tasks simultaneously without significant accuracy loss, reducing API costs by as much as 17-fold. For large health systems, this could translate to substantial annual savings.
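The batching idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the study's actual code: the batch size of 50 comes from the article, but the prompt wording and function names are assumptions.

```python
# Hypothetical sketch of the batching strategy: rather than sending one
# API call per clinical task, up to 50 tasks are packed into one prompt.

BATCH_SIZE = 50  # per the study, up to 50 tasks can share a batch


def build_batched_prompts(tasks, batch_size=BATCH_SIZE):
    """Group task descriptions into combined prompts, one per batch."""
    prompts = []
    for i in range(0, len(tasks), batch_size):
        batch = tasks[i:i + batch_size]
        numbered = "\n".join(f"{n}. {t}" for n, t in enumerate(batch, 1))
        prompts.append(
            "Answer each numbered clinical task separately:\n" + numbered
        )
    return prompts


# 130 individual tasks collapse into 3 API calls instead of 130.
tasks = [f"Clinical task {k}" for k in range(130)]
batched = build_batched_prompts(tasks)
print(f"{len(tasks)} tasks -> {len(batched)} API calls")
```

Each combined prompt would then be sent as a single request; the cost saving comes from amortizing per-call overhead across many tasks.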

The research team, led by Dr. Girish Nadkarni and Dr. Eyal Klang, tested 10 LLMs using real patient data,...
