MedCity News January 12, 2021
AI tools increasingly occupy a regulatory gray area in healthcare. For clinicians to assess whether these tools are trustworthy, they need transparency into how they work, said panelists at CES.
A host of AI tools have been developed quickly in response to the Covid-19 pandemic, from algorithms that screen lung X-rays for signs of the disease to triage tools that predict which patients will become critically ill. But how do we know they are working as intended?
Transparency in how these tools are developed and their intended use is critical, experts said at a virtual panel at CES.
“You can’t just set a piece of software in front of somebody and say, ‘trust me,’ particularly if they need to make decisions based...