Health Care Blog December 12, 2019
One big theme in AI research has been the idea of interpretability. How should AI systems explain their decisions to engender trust in their human users? Can we trust a decision if we don’t understand the factors that informed it?
I’ll have a lot more to say some other time about the latter question, which is philosophical rather than technical in nature, but today I wanted to share some of our research into the first question: can our models explain their decisions in a way that convinces humans to trust them?
Decisions, decisions
I am a radiologist, which makes me something of an expert in the field of human image analysis. We are often asked to explain our assessment...