
Generative AI is “reshaping our relationship with knowledge”

Consulting firm Accenture talks about technology “reshaping our relationship with knowledge.”  In a paper entitled “Technology Vision 2024”, Accenture says:

“The search-based ‘librarian’ model of human-data interaction is giving way to a new ‘advisor’ model.  Rather than running searches to curate results, people are now asking generative AI chatbots for answers.”

While ChatGPT is undoubtedly the most widely known generative AI question-answering tool, it is hardly the only one: Google and Microsoft have embedded such capabilities in their consumer-facing internet search engines, and many enterprise applications are becoming gen AI enabled.

The future of generative AI

Northern Light warmly embraces Accenture’s vision – today, we help business researchers get instant answers to their market and competitive intelligence questions.  However, Accenture’s vision extends far beyond the evolution of search, ultimately “allowing companies to put [large language model (LLM) advisors] with the breadth of enterprise knowledge at every employee’s fingertips.”

That’s exciting and transformative, and likely just the start.  Accenture envisions “the rise of entire agent ecosystems – large networks of interconnected AI that will push enterprises to think about their intelligence and automation strategy in a fundamentally different way.”

Putting generative AI to work in enterprise applications

For now, however, workers in a corporate environment must walk before they run: they need to learn how best to use the gen AI systems being embedded in their enterprise applications today.  The most reliable of these use Retrieval Augmented Generation (RAG), which “ensures that the [underlying large language] model has access to the most current, reliable facts, and that users have access to the model’s sources, ensuring that its claims can be checked for accuracy and ultimately trusted.”  In other words, RAG significantly reduces generative AI’s “hallucination” problem.
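To make the retrieval step concrete, here is a toy sketch of the "R" in RAG, assuming a plain keyword-overlap scorer; production systems typically rank documents with vector embeddings instead, but the shape of the step is the same: fetch the passages most relevant to the question before the model ever sees it.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents that best match the query.

    Scoring here is naive word overlap, purely for illustration;
    real RAG pipelines use embedding similarity or a search index.
    """
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]


docs = [
    "ExampleCo was acquired by BigCorp in March 2023.",
    "BigCorp reported record revenue in the second quarter.",
    "Analysts expect cloud spending to grow next year.",
]
top = retrieve("who acquired ExampleCo", docs, k=1)
print(top[0])
```

The retrieved passages, rather than the model's generic training data, then become the grounding material for the answer, which is what lets a user check the model's claims against named sources.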

In a RAG-based gen AI application, the user may have to select the content the application refers to in a given interaction, since the point of RAG is to derive better gen AI results from vetted, authoritative source material rather than from generic (i.e., internet-based) training data.  In addition, there is an art to writing an accurate generative AI prompt: it should direct the system to derive its answer only from the specified content and to respond in a purely factual manner.  You don’t want the AI to get creative in its prose when your purpose is to gather reliable intelligence on which to base a strategic business decision.
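A minimal sketch of such a prompt might look like the following; the wording and the "not found" fallback are illustrative assumptions, not any vendor's actual prompt, but they show the two constraints the paragraph describes: answer only from the supplied sources, and answer factually.

```python
def build_rag_prompt(question: str, sources: list[str]) -> str:
    """Assemble a grounded prompt: numbered source passages followed by
    instructions restricting the model to those passages alone."""
    context = "\n".join(
        f"[{i + 1}] {passage}" for i, passage in enumerate(sources)
    )
    return (
        "Answer the question using ONLY the numbered sources below. "
        "Write in a strictly factual tone and cite sources by number. "
        "If the sources do not contain the answer, reply "
        "'Not found in the provided sources.'\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


prompt = build_rag_prompt(
    "Who acquired ExampleCo in 2023?",
    ["ExampleCo was acquired by BigCorp in March 2023."],
)
print(prompt)
```

The explicit fallback instruction matters: telling the model what to say when the sources are silent is one of the simplest ways to discourage it from inventing an answer.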

RAG at work in enterprise knowledge management

For example, Northern Light uses generative AI in our SinglePoint™ enterprise knowledge management platform for market and competitive intelligence.  When asking SinglePoint a market research question, a user must select the content collection – e.g., business news, thought leaders, corporate financial reports, etc. – from which they want to derive an answer; grounding the question in the right collection maximizes the likelihood that it can be answered well.  Fortunately, in SinglePoint, we have pre-programmed the prompt to interact in a highly specific manner with the large language model – we use GPT-3.5 Turbo – so the user doesn’t have to worry about anything other than asking their question, in plain language, as directly as they can.

This is a micro-example of what Accenture means when they write: “When technologies can better understand us – our behavior and our intentions – they will more effectively adapt to us.”  That notion applies today to generative AI-based question answering and will come to mean far more as gen AI and its applications advance in the years ahead.
