Inside Precision Medicine October 11, 2024
Anita Chakraverty

Chatbots powered by artificial intelligence (AI) may be unreliable for providing patient drug information, with a study showing that they supply a considerable number of incorrect or potentially harmful answers.

The findings led the researchers to suggest that patients and healthcare professionals should exercise caution when using these computer programs, which are designed to mimic human conversation.

The researchers found that the AI-powered chatbot studied, which was embedded in a search engine, provided answers of high, but still insufficient, quality.

The answers also had low readability, according to the study in BMJ Quality & Safety, and on average required a degree-level education to be understood.

“In this cross-sectional study, we observed that search engines with an AI-powered...

