Healthcare Innovation January 21, 2026
Patient safety organization says chatbots can provide valuable assistance but can also deliver misleading information that could result in patient harm
Now in its 18th year, the Top 10 Health Technology Hazards report from nonprofit patient safety organization ECRI identifies AI chatbots as the top concern for 2026. No. 2 on this year’s list is unpreparedness for a “digital darkness” event, a sudden loss of access to electronic systems and patient information.
ECRI says that chatbots that rely on large language models (LLMs) — such as ChatGPT, Claude, Copilot, Gemini, and Grok — produce human-like, expert-sounding responses to users’ questions. It notes that the tools are neither regulated as medical devices nor validated for healthcare purposes but are increasingly...