Inside Precision Medicine September 25, 2024
Jonathan D. Grinstein, PhD

These techniques can improve the evaluation of possible harms to health equity caused by AI responses generated by large language models (LLMs)

When an individual submits a query to a chatbot on a healthcare website, it is critical that the response neither offends nor discriminates against the user, since such failures can undermine the delivery of care.

Google Research published a study in Nature Medicine that lays out tools and techniques to improve the evaluation of possible harms to health equity caused by artificial intelligence (AI) responses generated by large language models (LLMs), such as those behind Gemini, ChatGPT, and Claude. This method does not provide a comprehensive solution for all text-related AI applications, but it...
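One technique commonly used in this line of equity-evaluation work is counterfactual analysis: posing the same medical question with only a demographic identifier changed, then checking whether the model's answers diverge. The sketch below illustrates the idea in minimal form; the function and field names (`make_counterfactual_pair`, `flag_for_review`, `QueryPair`) are illustrative assumptions, not the study's actual tooling or API.

```python
from dataclasses import dataclass


@dataclass
class QueryPair:
    """A medical query instantiated for two demographic groups."""
    base: str
    counterfactual: str


def make_counterfactual_pair(template: str, group_a: str, group_b: str) -> QueryPair:
    # Fill the same question template with two different group identifiers,
    # so any difference in the answers is attributable to the group term alone.
    return QueryPair(
        base=template.format(group=group_a),
        counterfactual=template.format(group=group_b),
    )


def flag_for_review(answer_a: str, answer_b: str) -> bool:
    # Crude automated screen: flag pairs whose answers differ so that human
    # raters can inspect them. Real studies use trained raters scoring
    # responses against multi-item rubrics, not string comparison.
    return answer_a.strip().lower() != answer_b.strip().lower()


pair = make_counterfactual_pair(
    "What is the recommended blood pressure target for a {group} patient?",
    "Black",
    "white",
)
# Identical answers pass the screen; divergent answers go to human raters.
print(flag_for_review("Under 130/80 mmHg.", "Under 130/80 mmHg."))
print(flag_for_review("Under 130/80 mmHg.", "It depends."))
```

The point of the string-equality screen is only to triage: in practice, equity harms often appear in answers that are superficially different but substantively fine, or superficially similar but subtly biased, which is why human rubric-based rating remains the backbone of these evaluations.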
