Medical Xpress January 11, 2025
Bob Yirka, Medical Xpress

In a controlled experiment, a team of medical researchers and AI specialists at NYU Langone Health has demonstrated how easily the data pool used to train LLMs can be tainted.

For their study, published in the journal Nature Medicine, the group generated thousands of articles containing misinformation, inserted them into an AI training dataset, and then ran general LLM queries to see how often the misinformation appeared.

Prior research and anecdotal evidence have shown that the answers given by LLMs such as ChatGPT are not always correct and, in fact, are sometimes wildly off-base. Prior research has also shown that misinformation planted intentionally on well-known internet sites can show up in generalized chatbot queries. In this...
