MIT Technology Review, September 12, 2024
James O'Donnell

The tool could assist Google in its efforts to embed AI in more and more of its products.

As long as chatbots have been around, they have made things up. Such “hallucinations” are an inherent part of how AI models work. They are a big problem, however, for companies betting big on AI, like Google, because they make the responses those models generate unreliable.

Google is releasing a tool today to address the issue. Called DataGemma, it uses two methods to help large language models fact-check their responses against reliable data and cite their sources more transparently to users.

The first of the two methods is called Retrieval-Interleaved Generation (RIG), which acts as a sort of fact-checker. If a user...
