STAT January 9, 2023
Enes Hosgor and Oguz Akin

Although artificial intelligence is entering health care with great promise, clinical AI tools are prone to bias and real-world underperformance at every stage from inception to deployment, including dataset acquisition, labeling and annotation, algorithm training, and validation. These biases can reinforce existing disparities in diagnosis and treatment.

To explore how well bias is being identified in the FDA review process, we looked at virtually every health care AI product approved between 1997 and October 2022. Our audit of data submitted to the FDA to clear clinical AI products for the market reveals major flaws in how this technology is being regulated.

Our analysis

The FDA approved 521 AI products between 1997 and October 2022: 500 under the 510(k) pathway,...
