MedCity News, January 12, 2021
By Elise Reuter

AI tools increasingly occupy a regulatory gray area in healthcare. For clinicians to assess whether these tools are trustworthy, they need transparency into how they work, panelists said at CES.

A host of AI tools have been developed quickly in response to the Covid-19 pandemic, from algorithms that screen lung X-rays for signs of Covid-19 to triage tools that predict which patients will become critically ill. But how do we know they are working as planned?

Transparency about how these tools are developed and what they are intended to do is critical, experts said during a virtual panel at CES.

“You can’t just set a piece of software in front of somebody and say, ‘trust me,’ particularly if they need to make decisions based...
