MedTech Dive, June 18, 2024
Elise Reuter

Developing quality assurance practices for AI models should be a priority, said Troy Tazbaz, director of the CDRH’s Digital Health Center of Excellence.

Dive Brief:

  • Less than a week after the Food and Drug Administration published best practices for transparency in machine learning-enabled medical devices, a leader with the Center for Devices and Radiological Health shared more detail on how the agency is thinking about development and quality assurance for artificial intelligence.
  • In a Monday blog post, Troy Tazbaz, director of CDRH’s Digital Health Center of Excellence, said establishing quality assurance practices to ensure AI models are accurate, reliable, ethical and equitable should be a top priority.
  • Tazbaz said solutions include continuous monitoring before, during and after deployment...

Topics: AI (Artificial Intelligence), FDA, Govt Agencies, Medical Devices, Technology