Health IT Analytics June 7, 2019
Jessica Kent

AMIA is encouraging the FDA to refine its AI and machine learning regulatory framework across several areas, including bias and cybersecurity.

AMIA is encouraging the FDA to modify its regulatory framework for artificial intelligence (AI)- and machine learning (ML)-based software as a medical device (SaMD), particularly in the areas of potential bias and cybersecurity risk.

In April 2019, the FDA announced that it would develop a framework for regulating AI products that self-update based on new data. Although the agency has authorized other AI products, those products typically use “locked” algorithms that don’t continually adapt or learn each time the algorithm is used.

In response to FDA’s request for feedback, AMIA offered comments on the draft framework, and outlined areas that may need to be...
