
The Case for Using Retrieval Augmented Generation in Generative AI Applications within the Enterprise

There is growing recognition across the business world that the technique known as “retrieval augmented generation” (RAG) is foundational to the effective use of generative AI (GenAI) within enterprise applications.

How retrieval augmented generation combats GenAI “hallucination”

A paper by the Boston Consulting Group’s BCG Platinion division makes the case clearly.  Entitled “Maximizing Business Insights with Retrieval Augmented Generation: How LLMs enhance enterprise question answering,” the paper explains that “GenAI has been known to exhibit highly realistic hallucinated data in its responses when it is not accompanied by factual context.  Integration with existing enterprise data can mitigate these hallucinations by providing the necessary context for factual question answering.”

In other words, the accuracy of GenAI-produced responses depends entirely upon the validity of the source material that the GenAI draws upon for its answers.  As BCG Platinion puts it, “RAG depends on the quality and coverage of the text corpus used for retrieval… To enable answering factual questions, we could insert all relevant documents. However, we would quickly hit limitations imposed by a maximum context window size that models support.

“Obviously, this would not be enough for an enterprise with terabytes of relevant data even if using GPT-4’s latest 128k context window, which amounts to roughly 200 single-space pages.  [RAG] solves this problem by loading only the relevant pieces of information needed to answer the question or fulfil the task.”
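The selection step BCG describes can be sketched in a few lines. This is an illustrative toy, not Northern Light's or BCG's code: the function name, the greedy packing strategy, and the word-count token estimate are all assumptions.

```python
# Minimal sketch: pack only the most relevant chunks into a fixed context window.
# Token counting is approximated as one word per token for illustration.

def fit_to_context(ranked_chunks, max_tokens=128_000):
    """Greedily select the highest-ranked chunks until the token budget is spent."""
    selected, used = [], 0
    for chunk in ranked_chunks:      # assumed already sorted by relevance
        cost = len(chunk.split())    # crude token estimate
        if used + cost > max_tokens:
            break
        selected.append(chunk)
        used += cost
    return selected

# Even with terabytes of source data, only the few passages that fit the
# budget (and rank highest) are ever sent to the model.
chunks = ["RAG retrieves relevant passages.", "Unrelated archive text that never makes the cut."]
print(fit_to_context(chunks, max_tokens=5))
```

In a production system the relevance ranking would come from a search engine or vector index, and token counts from the model's own tokenizer, but the budget logic is the same.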

Northern Light rides the retrieval augmented generation train

Northern Light wholeheartedly agrees with BCG’s views on the power and necessity of retrieval augmented generation for enterprise GenAI applications.  That’s why our SinglePoint™ enterprise knowledge management platform for market and competitive intelligence employs RAG in its embedded question answering feature.

We use OpenAI’s GPT-3.5 Turbo large language model (LLM) as the basis for our GenAI question answering solution.  It’s perfectly fine to use a commercial LLM that has learned “the patterns and structure [of language] from an immense volume of data they have been exposed to.”  (The LLM is basically a predictive text generator, making an informed guess as to which word most likely comes next in a coherent sentence.)  The key is that, for any given inquiry, the answer does not rely on the LLM’s training data, which is drawn from the internet at large and therefore contains much dubious content.  RAG limits and controls the content upon which an answer is based.
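One common way RAG "limits and controls" the content is by instructing the model to answer only from the retrieved passages supplied in the prompt. The sketch below is a generic illustration of that pattern; the prompt wording and function name are assumptions, not SinglePoint's actual implementation.

```python
# Illustrative sketch of a grounded RAG prompt: the LLM is told to answer
# from the supplied passages only, rather than from its training data.

def build_grounded_prompt(question, passages):
    """Assemble a prompt that restricts the answer to the retrieved passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, say you do not know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_grounded_prompt(
    "What does RAG stand for?",
    ["RAG stands for retrieval augmented generation."],
)
```

The resulting string would then be sent to the LLM as the user message; because the only facts in the prompt come from vetted sources, a hallucinated answer is far less likely.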

How does retrieval augmented generation work?

As BCG Platinion explains, “The basic RAG architecture is relatively straightforward. The information to be used is contained within the documents or records stored in existing corporate data sources. It is then indexed and retrieved in a structured manner as needed. Each of these processes, however, are critical to success. Indexing, for example, essentially maps out a vast sea of data into a navigable landscape.”
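The two steps BCG names, indexing and retrieval, can be shown with a toy example. Real systems use vector embeddings and a search engine; the keyword-overlap scorer below is a deliberately simple stand-in, and every name in it is an assumption for illustration.

```python
# Toy index-and-retrieve loop illustrating the basic RAG architecture:
# 1) index documents into a searchable form, 2) retrieve the best matches for a query.

def build_index(documents):
    """Index each document as a set of lowercase terms."""
    return [(doc, set(doc.lower().split())) for doc in documents]

def retrieve(index, query, top_k=2):
    """Return the top_k documents sharing the most terms with the query."""
    terms = set(query.lower().split())
    scored = sorted(index, key=lambda item: len(item[1] & terms), reverse=True)
    return [doc for doc, _ in scored[:top_k]]

index = build_index([
    "RAG retrieves relevant enterprise documents.",
    "Quarterly travel policy for employees.",
    "Indexing maps data into a navigable landscape.",
])
print(retrieve(index, "RAG retrieves relevant", top_k=1))
```

Swapping the term-overlap score for embedding similarity turns this sketch into the standard vector-search retrieval step, but the index-then-retrieve shape is unchanged.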

Northern Light has mastered the art of aggregating reliable, vetted sources of business research for market and competitive intelligence, the domain of the SinglePoint platform.  By applying RAG to those content collections, SinglePoint’s GenAI becomes a powerful tool for jump-starting the research process.
