STAT June 28, 2022
Katie Palmer

As more machine learning tools reach patients, developers are starting to get smart about the potential for bias to seep in. But a growing body of research emphasizes that even carefully trained models — ones built to ignore race — can breed inequity in care.

Researchers at the Massachusetts Institute of Technology and IBM Research recently showed that algorithms based on clinical notes — the free-form text providers jot down during patient visits — could predict the self-identified race of a patient, even when the data had been stripped of explicit mentions of race. It’s a clear sign of a big problem: Race is so deeply embedded in clinical information that straightforward approaches like race redaction won’t cut...
