Medical Economics April 25, 2024
Todd Shryock

7% of messages generated by AI were deemed unsafe

In a study published in The Lancet Digital Health, researchers from Mass General Brigham demonstrated the promise of large language models (LLMs) for reducing physician workload and enhancing patient education. However, the study underscores the need for vigilant human oversight, given the risks associated with LLM-generated communications.

Physicians today face mounting administrative burdens, which contribute significantly to burnout. To address this challenge, electronic health record (EHR) vendors have increasingly turned to generative AI to help clinicians compose patient messages. Despite the potential efficiency gains, questions have lingered about safety and clinical impact.

Lead author Dr. Danielle Bitterman, from the Artificial Intelligence in Medicine (AIM) Program at Mass General Brigham, said...
