Physicians Practice January 16, 2026
Neil Baum, MD | Fact checked by: Keith A. Reynolds

AI in health care poses risks, including misinformation and bias. Physicians must verify AI responses to ensure patient safety and accurate care.

I have previously written about the perils of AI hallucinations in the health care setting. I'd like to share an example that illustrates the importance of verifying ChatGPT responses.

A friend of mine, customer service expert and New York Times bestselling author Shep Hyken, was attending a meeting in Malaysia where one of the speakers asked ChatGPT the following question: “I just bought a pair of shoes from Amazon, and they messed up the order. The right-foot shoe was for the left foot, and the left-foot shoe was for my right foot. What should I...

Topics: AI (Artificial Intelligence), Technology