VentureBeat January 14, 2025
Pharmaceutical giant GSK is pushing the boundaries of what generative AI can achieve in healthcare areas like scientific literature review, genomic analysis and drug discovery. But it faces a persistent problem with hallucinations, instances in which AI models generate incorrect or fabricated information. Errors in healthcare are not merely inconvenient; they can have life-altering consequences. Here’s how GSK is tackling the problem.
The hallucination problem in generative healthcare
Much of the effort to reduce hallucinations has focused on the training of a large language model (LLM), when it is learning from data. To mitigate hallucinations, GSK instead employs strategies at inference time, when a model is actually being used in a real application. These strategies...
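One common inference-time strategy in this space is self-consistency: sample the model several times on the same question and accept an answer only when a large fraction of samples agree, flagging low-agreement cases for human review. The sketch below is a generic illustration of that idea, not GSK's actual system; the `toy_model` function is a hypothetical stand-in for a real LLM API call.

```python
from collections import Counter

def self_consistency_check(generate, prompt, n_samples=5, threshold=0.6):
    """Sample the model n_samples times and accept the most common answer
    only when the agreement fraction meets the threshold; otherwise
    return None to flag a possible hallucination for review."""
    answers = [generate(prompt) for _ in range(n_samples)]
    best, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    if agreement >= threshold:
        return best, agreement
    return None, agreement  # low agreement: treat as unreliable

# Hypothetical stand-in for an LLM call; a real system would query a model API
# with sampling enabled so repeated calls can disagree.
def toy_model(prompt):
    return "paracetamol" if "fever" in prompt else "unknown"

answer, score = self_consistency_check(toy_model, "What treats a mild fever?")
```

Because the stand-in model here is deterministic, all samples agree; with a real sampled LLM, the agreement score becomes a cheap signal for routing uncertain answers to a human reviewer.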