Medical Xpress September 24, 2024
Adam Zewe, Massachusetts Institute of Technology

AI systems are increasingly being deployed in safety-critical health care situations. Yet these models sometimes hallucinate incorrect information, make biased predictions, or fail for unexpected reasons, which could have serious consequences for patients and clinicians.

In a commentary article published today in Nature Computational Science, MIT Associate Professor Marzyeh Ghassemi and Boston University Associate Professor Elaine Nsoesie argue that to mitigate these potential harms, AI systems should be accompanied by responsible-use labels, similar to U.S. Food and Drug Administration-mandated labels that are placed on prescription medications.

MIT News spoke with Ghassemi about the need for such labels, the information they should convey, and how labeling procedures could be implemented.

Why do we need responsible use labels for AI systems in...
