Medical Xpress September 24, 2024
Adam Zewe, Massachusetts Institute of Technology

AI systems are increasingly being deployed in safety-critical health care situations. Yet these models sometimes hallucinate incorrect information, make biased predictions, or fail for unexpected reasons, which could have serious consequences for patients and clinicians.

In a commentary article published today in Nature Computational Science, MIT Associate Professor Marzyeh Ghassemi and Boston University Associate Professor Elaine Nsoesie argue that to mitigate these potential harms, AI systems should be accompanied by responsible-use labels, similar to U.S. Food and Drug Administration-mandated labels that are placed on prescription medications.

MIT News spoke with Ghassemi about the need for such labels, the information they should convey, and how labeling procedures could be implemented.

Why do we need responsible-use labels for AI systems in...
