The Health Care Blog, December 12, 2019
Luke Oakden-Rayner

One big theme in AI research has been the idea of interpretability. How should AI systems explain their decisions to engender trust in their human users? Can we trust a decision if we don’t understand the factors that informed it?

I’ll have a lot more to say some other time on the latter question, which is philosophical rather than technical in nature, but today I want to share some of our research into the first question. Can our models explain their decisions in a way that convinces humans to trust them?

Decisions, decisions

I am a radiologist, which makes me something of an expert in the field of human image analysis. We are often asked to explain our assessment...
