DOTmed November 19, 2024
A new study from researchers at the Icahn School of Medicine at Mount Sinai has outlined strategies for using large language models (LLMs), such as GPT-4, in health care systems while balancing cost efficiency and performance.
One key strategy identified in the study is grouping up to 50 clinical tasks—such as matching patients to clinical trials, extracting data for research, and identifying candidates for preventive health screenings—into a single batch. This approach allows models to handle tasks simultaneously without significant accuracy loss, reducing API costs by as much as 17-fold. For large health systems, this could translate to substantial annual savings.
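The batching idea can be sketched in a few lines: rather than one API call per clinical task, many independent tasks are concatenated into a single numbered prompt, and the numbered answers are parsed back out of the one response. This is a minimal illustration of the general technique, not the study's actual prompts or models; the `call_llm` stub and the example task texts are hypothetical.

```python
def build_batched_prompt(tasks):
    """Combine many independent tasks into one numbered prompt."""
    header = ("Answer each numbered task separately. "
              "Prefix every answer with its task number.\n\n")
    body = "\n".join(f"{i}. {t}" for i, t in enumerate(tasks, start=1))
    return header + body

def split_batched_answers(response, n_tasks):
    """Parse 'N. answer' lines back into a per-task answer list."""
    answers = [""] * n_tasks
    for line in response.splitlines():
        num, sep, text = line.partition(". ")
        if sep and num.strip().isdigit():
            idx = int(num) - 1
            if 0 <= idx < n_tasks:
                answers[idx] = text.strip()
    return answers

# Hypothetical usage: one request covers the whole batch,
# so the per-task API cost drops roughly with the batch size.
tasks = [
    "Does this note mention diabetes?",
    "Is the patient a candidate for a colonoscopy screening?",
]
prompt = build_batched_prompt(tasks)
# response = call_llm(prompt)  # single API call instead of len(tasks) calls
```

The cost saving comes from amortizing the fixed prompt overhead (instructions, shared context) across every task in the batch, at the risk of accuracy degrading as batches grow, which is why the study caps batches at around 50 tasks.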
The research team, led by Dr. Girish Nadkarni and Dr. Eyal Klang, tested 10 LLMs using real patient data,...