Inside Precision Medicine September 25, 2024
Jonathan D. Grinstein, PhD

These techniques can improve the evaluation of possible harms to health equity caused by AI responses generated by large language models (LLMs)

When a user submits a query through a chatbot on a healthcare website, the utmost priority is to avoid responses that offend or discriminate against that user, which could undermine the delivery of care.

Google Research published a study in Nature Medicine that lays out tools and techniques to improve the evaluation of possible harms to health equity caused by artificial intelligence (AI) responses generated by large language models (LLMs), such as those underlying Gemini, ChatGPT, and Claude. This method does not provide a comprehensive solution for all text-related AI applications, but it...

Topics: AI (Artificial Intelligence), Equity/SDOH, Healthcare System, Technology
