VentureBeat January 14, 2025
Bryson Masse

Pharmaceutical giant GSK is pushing the boundaries of what generative AI can achieve in healthcare areas such as scientific literature review, genomic analysis and drug discovery. But it faces a persistent problem with hallucinations, in which AI models generate incorrect or fabricated information: errors in healthcare are not merely inconvenient; they can have life-altering consequences. Here’s how GSK is tackling the problem.

The hallucination problem in generative healthcare

Much of the effort to reduce hallucinations has focused on the training of a large language model (LLM), when it is learning from data. To mitigate hallucinations, GSK instead employs strategies at inference time, when a model is actually being used in a real application. These strategies...
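The article does not detail GSK's specific techniques, but a common inference-time mitigation is a self-consistency check: sample the model several times on the same question and accept an answer only if enough samples agree, abstaining otherwise. Below is a minimal, hypothetical sketch of that idea; the `generate` callable, the agreement threshold, and the stub "models" are all illustrative assumptions, not GSK's actual pipeline.

```python
import itertools
from collections import Counter

def self_consistency_check(generate, prompt, n_samples=5, min_agreement=0.6):
    """Sample the model several times; return the top answer only if a
    sufficient fraction of samples agree, else abstain (return None).

    `generate` is any callable prompt -> answer string (a real LLM call
    in practice; stubbed here so the sketch is self-contained)."""
    answers = [generate(prompt) for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    if agreement >= min_agreement:
        return top_answer, agreement
    return None, agreement  # abstain rather than risk a hallucination

# Stub "models": one answers consistently, one is inconsistent.
consistent = lambda p: "BRCA1"
_flaky_cycle = itertools.cycle(["drug A", "drug B", "drug C", "drug A", "drug B"])
flaky = lambda p: next(_flaky_cycle)

print(self_consistency_check(consistent, "Which gene?"))  # ('BRCA1', 1.0)
print(self_consistency_check(flaky, "Which drug?"))       # (None, 0.4)
```

Abstaining on disagreement trades coverage for reliability, which matches the article's framing: in healthcare, refusing to answer is usually preferable to answering wrongly.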
