Healthcare IT News | May 13, 2022
Kat Jercich

Researchers from Massachusetts General Hospital note that models can be biased against some groups while performing better for others.

Researchers at Massachusetts General Hospital say that spotting bias in artificial intelligence and machine learning requires a holistic evaluation – and that models can be biased against certain groups while simultaneously performing better for others.

“Despite the eminent work in other fields, bias often remains unmeasured or partially measured in healthcare domains,” observed the researchers in the study, which was published this week in the Journal of the American Medical Informatics Association.

“Most published research articles only provide information about very few performance metrics,” they added. “The few studies that officially aim at addressing bias usually utilize single measures...
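To illustrate the kind of multi-metric, per-group evaluation the researchers advocate, the sketch below computes AUC, sensitivity, specificity and a calibration-oriented Brier score separately for each demographic group. This is a hypothetical example in Python, not the study's code: the function name, the 0.5 decision threshold, the choice of metrics and the synthetic data are all illustrative assumptions.

```python
# Hypothetical sketch (not the study's code): score a binary classifier
# with several metrics per demographic group, rather than a single measure.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix, brier_score_loss

def per_group_report(y_true, y_score, groups, threshold=0.5):
    """Return AUC, sensitivity, specificity and Brier score for each group."""
    report = {}
    for g in np.unique(groups):
        mask = groups == g
        yt, ys = y_true[mask], y_score[mask]
        yp = (ys >= threshold).astype(int)
        tn, fp, fn, tp = confusion_matrix(yt, yp, labels=[0, 1]).ravel()
        report[g] = {
            "auc": roc_auc_score(yt, ys),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
            "brier": brier_score_loss(yt, ys),  # calibration-oriented measure
        }
    return report

# Synthetic data for demonstration only.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 500)
y_score = np.clip(y_true * 0.3 + rng.random(500) * 0.7, 0, 1)
groups = rng.choice(["A", "B"], 500)
for g, metrics in per_group_report(y_true, y_score, groups).items():
    print(g, {k: round(v, 3) for k, v in metrics.items()})
```

Comparing several metrics side by side in this way can surface cases where a model's overall accuracy looks acceptable while its sensitivity or calibration is noticeably worse for one group, which is the pattern of partial bias the study describes.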

Topics: AI (Artificial Intelligence), Health System / Hospital, Provider, Survey / Study, Technology, Trends