VentureBeat January 14, 2025
Bryson Masse

Pharmaceutical giant GSK is pushing the boundaries of what generative AI can achieve in healthcare areas like scientific literature review, genomic analysis and drug discovery. But it faces a persistent problem: hallucinations, in which AI models generate incorrect or fabricated information. Errors in healthcare are not merely inconvenient; they can have life-altering consequences. Here’s how GSK is tackling it.

The hallucination problem in generative healthcare

Much of the effort to reduce hallucinations has focused on the training of a large language model (LLM), when it is learning from data. To mitigate hallucinations, GSK instead employs strategies at inference time, when a model is actually being used in a real application. These strategies...
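As an illustration of what an inference-time check can look like, here is a minimal self-consistency sketch: sample the model several times and only trust an answer when most samples agree. This is a generic technique, not GSK's specific system; the `generate` callable below is a hypothetical stand-in for any LLM call.

```python
from collections import Counter
from typing import Callable, List, Optional

def self_consistent_answer(
    generate: Callable[[str], str],   # hypothetical LLM call: prompt -> answer
    prompt: str,
    n_samples: int = 5,
    min_agreement: float = 0.6,
) -> Optional[str]:
    """Sample the model several times and keep an answer only if most samples agree.

    Returns None when no answer reaches the agreement threshold, signalling that
    the output should be escalated for human review rather than trusted.
    """
    answers: List[str] = [generate(prompt).strip().lower() for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    if count / n_samples >= min_agreement:
        return answer
    return None  # low agreement across samples -> treat as a possible hallucination
```

The design choice here is to spend extra compute at inference time rather than retraining the model: disagreement across samples is used as a cheap proxy signal that an output may be fabricated.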
