MedTech Dive January 22, 2026
Nick Paul Taylor

The nonprofit said technologies like ChatGPT have suggested incorrect diagnoses, invented body parts and otherwise provided information that could lead to harm.

Dive Brief:

  • Misuse of artificial intelligence-powered chatbots in healthcare tops ECRI’s annual list of health technology hazards.
  • The nonprofit ECRI, which shared its list Wednesday, said chatbots built on ChatGPT and other large language models can provide false or misleading information that could result in significant patient harm.
  • ECRI put chatbot misuse ahead of sudden loss of access to electronic systems and the availability of substandard and falsified medical products on its list of the biggest hazards for this year.

Dive Insight:

AI is a long-standing concern for ECRI. Insufficient governance of AI...
