Forbes, October 21, 2024
Mike Flaxman is VP of Product at HEAVY.AI.
Since generative AI (GenAI) started to take off a couple of years ago, one of its top applications has been helping organizations analyze and learn from their data. Large language models (LLMs) allow users to simply ask a question and quickly get an answer. But there has always been a major hang-up: How can users be sure they're getting an accurate answer?
Reliability and explainability are basic business requirements for analytics. The challenge is that LLMs can behave egregiously unless they are appropriately tuned and constrained. In practice, reliability and accuracy require alignment between the kinds of questions being asked and the data available to answer them. When an LLM lacks sufficient data to...