Medical Xpress January 11, 2025
Bob Yirka, Medical Xpress

By conducting tests under an experimental scenario, a team of medical researchers and AI specialists at NYU Langone Health has demonstrated how easy it is to taint the data pool used to train LLMs.

For their study published in the journal Nature Medicine, the group generated thousands of articles containing and inserted them into an AI and conducted general LLM queries to see how often the misinformation appeared.

Prior research and anecdotal evidence have shown that the answers given by LLMs such as ChatGPT are not always correct and, in fact, are sometimes wildly off-base. Prior research has also shown that misinformation planted intentionally on well-known internet sites can show up in generalized chatbot queries. In this...

Today's Sponsors

Venturous
Got healthcare questions? Just ask Transcarent

Today's Sponsor

Venturous

 
Topics: AI (Artificial Intelligence), Health System / Hospital, Provider, Survey / Study, Technology, Trends
How Generative AI Affects Teamwork at Companies
The AI Data Divide: Why Intelligence, Not Algorithms, Will Determine Enterprise AI Success
The Future Of Healthcare: Three Shifts That Demand Action
How Health Plans Are Adding Up Real Value From AI
FHC #170: AI could save 500,000 lives, $1.5 trillion a year, says Dr. Pearl

Share This Article