Inside Precision Medicine September 25, 2024
Jonathan D. Grinstein, PhD

These techniques can improve the evaluation of possible health equity harms in AI responses generated by large language models (LLMs)

When an individual submits a query through a chatbox on a healthcare website, it is critical that the response neither offends nor discriminates against the user, since such a failure could undermine the delivery of care.

Google Research has published a study in Nature Medicine that lays out tools and techniques to improve the evaluation of possible health equity harms in artificial intelligence (AI) responses generated by large language models (LLMs), such as those underlying Gemini, ChatGPT, and Claude. This method does not provide a comprehensive solution for all text-related AI applications, but it...
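To make the evaluation approach concrete, the sketch below shows one simple way human-rater assessments of LLM answers could be aggregated across equity-related harm dimensions. The dimension names, data shapes, and function are illustrative assumptions for this article, not the study's actual rubric or tooling.

```python
# Hypothetical sketch of aggregating rubric-based harm ratings of LLM answers.
# The dimension names below are assumptions, not the study's actual rubric.
from collections import Counter

DIMENSIONS = ("inaccuracy_for_group", "stereotyping", "omission_of_context")

def aggregate_ratings(ratings):
    """Compute the fraction of answers flagged for each harm dimension.

    `ratings` is a list of dicts mapping dimension name -> bool
    (True means a rater judged that harm to be present in the answer).
    """
    counts = Counter()
    for rating in ratings:
        for dim in DIMENSIONS:
            if rating.get(dim, False):
                counts[dim] += 1
    n = len(ratings) or 1  # avoid division by zero on empty input
    return {dim: counts[dim] / n for dim in DIMENSIONS}

# Two rater assessments of two hypothetical LLM answers
example = [
    {"inaccuracy_for_group": True, "stereotyping": False, "omission_of_context": False},
    {"inaccuracy_for_group": False, "stereotyping": False, "omission_of_context": True},
]
print(aggregate_ratings(example))
```

In practice, per-dimension flag rates like these could be compared across demographic groups or adversarial question sets to surface where a model's answers raise equity concerns most often.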
