Forbes, October 21, 2024
Mike Flaxman

Mike Flaxman is VP of Product at HEAVY.AI.

Since generative AI (GenAI) started to take off a couple of years ago, one of its top applications has been helping organizations analyze and learn from their data. Large language models (LLMs) allow users to simply ask a question and quickly get an answer. But there’s always been a major hangup: How can users be sure they’re getting an accurate answer?

Reliability and explainability are basic business requirements for analytics. The challenge is that LLMs can exhibit egregious behavior unless they are appropriately tuned and constrained. In practice, reliability or accuracy requires alignment between the kinds of questions being asked and the data available to answer them. When an LLM lacks sufficient data to...
