VentureBeat January 14, 2025
Bryson Masse

Pharmaceutical giant GSK is pushing the boundaries of what generative AI can achieve in healthcare areas like scientific literature review, genomic analysis and drug discovery. But it faces a persistent problem with hallucinations, instances in which AI models generate incorrect or fabricated information. Errors in healthcare are not merely inconvenient; they can have life-altering consequences. Here’s how GSK is tackling it.

The hallucination problem in generative healthcare

Much of the effort to reduce hallucinations has focused on the training of a large language model (LLM), when it is learning from data. GSK instead applies mitigation strategies at inference time, when a model is actually being used in a real application. These strategies...
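The article does not spell out which inference-time strategies GSK uses, but one widely used technique in this category is self-consistency checking: sampling several answers to the same question and only accepting one when a clear majority agrees. The sketch below is a minimal illustration of that idea, assuming an OpenAI-compatible chat API; the model name, prompt, sample count and agreement threshold are placeholders, not details from GSK.

```python
# Minimal sketch of an inference-time self-consistency check (illustrative only;
# the article does not describe GSK's specific implementation).
# Assumes an OpenAI-compatible chat API; model name and threshold are placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_self_consistency(question: str, samples: int = 5, threshold: float = 0.6):
    """Sample several answers and accept one only if a clear majority agrees."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # placeholder model
        messages=[{"role": "user", "content": question}],
        temperature=0.7,              # nonzero temperature so samples can differ
        n=samples,                    # draw several independent completions
    )
    answers = [choice.message.content.strip().lower() for choice in response.choices]
    top_answer, count = Counter(answers).most_common(1)[0]
    if count / samples >= threshold:
        return top_answer             # answers mostly agree: accept
    return None                       # answers diverge: flag for human review

if __name__ == "__main__":
    result = ask_with_self_consistency("Which gene encodes the CFTR protein?")
    print(result or "Low agreement across samples; escalate to a reviewer.")
```

In a real system the sampled answers would be normalized or compared semantically rather than by exact string match; the majority vote here is deliberately simplified to show the shape of an inference-time check.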
