Inside Precision Medicine October 11, 2024
Anita Chakraverty

Chatbots powered by artificial intelligence (AI) may be unreliable for providing patient drug information, with a study showing that they supply a considerable number of incorrect or potentially harmful answers.

The findings led the researchers to suggest that patients and healthcare professionals should exercise caution when using these computer programs, which are designed to mimic human conversation.

Researchers found that the AI-powered chatbot studied, which was embedded in a search engine, provided answers that were of high, but still insufficient, quality.

The answers also had low readability, according to the study in BMJ Quality & Safety, and on average required a degree-level education to understand.

“In this cross-sectional study, we observed that search engines with an AI-powered...
