Medical Xpress, January 21, 2026
Noah Lloyd, Northeastern University

Artificial intelligence is increasingly used as a tool across health care settings, from writing physicians’ notes to making recommendations in specific cases. Research has found that AI systems and large language models can reflect racial biases present in their training data, which can influence outputs in ways users may not realize.

New research out of Northeastern University, posted to the arXiv preprint server, looks past an LLM’s responses to examine what factors into its decisions and determine whether race has been problematically deployed in making a recommendation. The method employs a technique called a sparse autoencoder, and the researchers see a future in which physicians could use such a tool to understand when bias is involved in an LLM’s decision-making.
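
The article does not show the team’s implementation, but the underlying technique is standard in interpretability research: train a sparse autoencoder on an LLM’s internal activations so that each learned feature tends to fire on a narrow, human-inspectable concept. The sketch below is a minimal, generic PyTorch version under assumed settings; the dimensions (d_model, d_hidden), the L1 coefficient, and the random stand-in activations are illustrative, not details from the Northeastern study.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseAutoencoder(nn.Module):
    """Learns an overcomplete, sparse dictionary of features from a
    model's hidden activations. All sizes here are illustrative."""

    def __init__(self, d_model: int = 768, d_hidden: int = 8192,
                 l1_coeff: float = 1e-3):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)
        self.l1_coeff = l1_coeff

    def forward(self, x: torch.Tensor):
        # ReLU keeps feature activations non-negative; combined with the
        # L1 penalty below, most features are zero for any given input.
        features = torch.relu(self.encoder(x))
        reconstruction = self.decoder(features)
        return reconstruction, features

    def loss(self, x: torch.Tensor) -> torch.Tensor:
        reconstruction, features = self(x)
        # Reconstruction error keeps the features faithful to the
        # activations; the L1 term enforces sparsity so individual
        # features stay interpretable.
        return (F.mse_loss(reconstruction, x)
                + self.l1_coeff * features.abs().mean())


if __name__ == "__main__":
    sae = SparseAutoencoder()
    # Stand-in for a batch of hidden-layer activations captured from an
    # LLM while it processes clinical prompts (random here for the demo).
    activations = torch.randn(32, 768)
    loss = sae.loss(activations)
    loss.backward()
    print(f"training loss: {loss.item():.4f}")
```

Once trained, the idea is to see which features activate on a given clinical prompt and check whether any race-associated feature is contributing to the model’s recommendation; that per-feature inspection is the kind of audit the researchers envision physicians performing.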

For example, Hiba Ahsan,...
