Medical Xpress January 11, 2025
Bob Yirka, Medical Xpress

In an experimental scenario, a team of medical researchers and AI specialists at NYU Langone Health has demonstrated how easily the data pool used to train LLMs can be tainted with misinformation.

For their study, published in the journal Nature Medicine, the group generated thousands of articles containing misinformation, inserted them into an LLM training data pool, and then ran general queries against the model to see how often the misinformation appeared in its answers.

Prior research and anecdotal evidence have shown that the answers given by LLMs such as ChatGPT are not always correct and, in fact, are sometimes wildly off-base. Prior research has also shown that misinformation planted intentionally on well-known internet sites can show up in generalized chatbot queries. In this...
