Medical Xpress, January 21, 2026
Artificial intelligence is increasingly used as a tool in many health care settings, from writing physicians’ notes to making recommendations in specific cases. Research has found that AI and large language models can reflect racial biases present in their training data, biases that can shape outputs in ways users may not realize.
New research out of Northeastern University, posted to the arXiv preprint server, looks past an LLM’s responses to examine the information factored into its decisions and determine whether race has been problematically used in making a recommendation. Employing a technique called a sparse autoencoder, the researchers see a future in which physicians could use such a tool to understand when bias is involved in an LLM’s decision-making.
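For readers curious about the underlying mechanics, the sketch below illustrates the general idea of a sparse autoencoder: a small network trained to re-express a model’s internal activations as a larger set of sparse features, which researchers can then inspect for signals such as race. The architecture, dimensions, and loss weighting shown here are illustrative assumptions, not the Northeastern team’s actual implementation.

# Illustrative sketch only, not the study's code: a sparse autoencoder
# that decomposes an LLM's hidden activations into a larger set of
# sparse, potentially interpretable features. All hyperparameters
# (dimensions, L1 weight) are assumptions for demonstration.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, d_features=4096, l1_coeff=1e-3):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)
        self.l1_coeff = l1_coeff

    def forward(self, activations):
        # ReLU yields a non-negative code; the L1 penalty below pushes
        # most entries to zero, so each input activates few features.
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return features, reconstruction

    def loss(self, activations):
        features, reconstruction = self.forward(activations)
        mse = ((reconstruction - activations) ** 2).mean()
        sparsity = self.l1_coeff * features.abs().sum(dim=-1).mean()
        return mse + sparsity

# After training on activations captured from the LLM, one can check
# which features fire on a given clinical prompt. If features that
# reliably co-occur with race-related text are active, race may be
# influencing the model's recommendation.
sae = SparseAutoencoder()
acts = torch.randn(32, 768)             # stand-in for captured LLM activations
features, _ = sae(acts)
fraction_active = (features > 0).float().mean()

In practice, each learned feature is interpreted by looking at which inputs most strongly activate it; a feature that fires on mentions of a patient’s race can then serve as a flag when it activates during a recommendation.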
For example, Hiba Ahsan,...