In generative AI, last month seems like a decade ago, given how quickly the market and technology are evolving.
“Way back” in February, we wrote about the paucity of available libraries or APIs that developers like Northern Light could use to access large language models for private applications that could be integrated into secure enterprise workflows – specifically, our SinglePoint™ knowledge management platform for market and competitive intelligence. Furthermore, there were no indications of pricing models. These facts applied equally to OpenAI, Microsoft, and Google. We expressed concern that Microsoft had a history of charging seven figures for its AI technology (which is why no one ever used it) and that Google had a history of open-sourcing theirs (which is why everyone used it). At the time, it was unclear whether OpenAI, the developer of ChatGPT, had retained any commercialization rights independent of Microsoft.
Well, it turns out they have. The biggest change that has occurred is that OpenAI exposed a developer API for an important model and announced pricing plans with metrics regarding various types of capacity limits.
Northern Light started working with the OpenAI API the day these changes were announced. We’re using OpenAI’s GPT-3.5 Turbo model, which has the merits of being faster than other OpenAI models and impressively accurate when working with text from the high-quality market and competitive intelligence content that Northern Light provides to its clients. Most importantly, it has a somewhat production-ready enterprise API that we can use. We know the pricing, and it is affordable.
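To make the mechanics concrete, here is a minimal sketch of the kind of request the Chat Completions endpoint for GPT-3.5 Turbo accepts. The model identifier is OpenAI’s real one; the helper function, prompt wording, and temperature setting are our illustrative assumptions, and actually sending the request (via OpenAI’s client library and an API key) is omitted.

```python
def build_chat_request(question: str, context: str,
                       model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body the /v1/chat/completions endpoint expects,
    asking the model to answer only from the supplied source text."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Answer using only the provided source text."},
            {"role": "user",
             "content": f"Source text:\n{context}\n\nQuestion: {question}"},
        ],
        "temperature": 0.2,  # low temperature favors faithful summarization
    }

request = build_chat_request(
    "What did the vendor announce?",
    "Excerpt from a market intelligence report goes here.",
)
```

The point of structuring it this way is that the application controls which source text the model sees, which matters when generating answers from curated enterprise content rather than the open web.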
Now, fast-forward a month, and lo and behold, GPT-4 has just been released. OpenAI claims that, while it is slower, its generated text is more accurate. Our impression from ad hoc testing of GPT-4 is that it lives up to that reputation: slower, but more accurate. Speed is a very big issue, as users get antsy when a search result takes a long time to appear. (ChatGPT placates the user by streaming the text one word at a time, so the user has something to watch while waiting for the complete response to form.)
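The word-at-a-time trick can be simulated locally. This sketch is not OpenAI’s streaming API; it is a toy generator we wrote to show why incremental display helps: the interface can render each chunk as it arrives instead of blocking until the whole response exists.

```python
import time
from typing import Iterator

def stream_tokens(text: str, delay: float = 0.0) -> Iterator[str]:
    """Yield a response one whitespace-delimited token at a time,
    mimicking the incremental display ChatGPT uses while the full
    completion is still being generated."""
    for token in text.split():
        time.sleep(delay)  # stand-in for per-token generation latency
        yield token + " "

# A UI would append each chunk to the page as it arrives.
partial = "".join(stream_tokens("The market grew twelve percent last year."))
```

The real OpenAI API offers a streaming mode that delivers completion chunks the same way, which is how a search application can keep perceived latency down even when total generation time is long.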
At present, the best combination of speed, accuracy, API availability, and pricing is GPT-3.5 Turbo. Our tests indicate that the generated text is very good at distilling the essence of the content’s insights and intelligence relevant to the prompt, and it reads very nicely as a coherent piece. While the API’s response is not snappy, and search results are not formed as fast as we would like, our conclusion is that what the OpenAI API can do at the present time is a significant improvement in the user experience of search applications.
Switching to another model supported by the API is not a big deal, so if GPT-4 or some future model becomes a better option down the road, we will switch to it then. Currently, the OpenAI enterprise API for GPT-4 is not production ready, so we cannot do development on it using Northern Light (or client) content. Pricing is also an issue: GPT-4’s announced API pricing is 15 times higher than GPT-3.5 Turbo’s. OpenAI’s recent history is to price new models very high and then reduce the price after another model they like better comes out, so this may change in the future.
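Why is switching not a big deal? Because the model is just a parameter in the request: the same request body works for either model, so an application can route by configuration. The model names below are real OpenAI identifiers; the tier names and selection logic are illustrative assumptions, not part of either company’s API.

```python
# Hypothetical routing table: which OpenAI model to use per quality tier.
MODEL_BY_TIER = {
    "fast": "gpt-3.5-turbo",  # lower latency and cost
    "accurate": "gpt-4",      # better output, slower and pricier
}

def pick_model(tier: str) -> str:
    """Return the configured model for a tier, defaulting to the fast one."""
    return MODEL_BY_TIER.get(tier, "gpt-3.5-turbo")
```

A future model then becomes a one-line configuration change rather than a code rewrite.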
This is an exciting time in search and knowledge management for market and competitive intelligence. Interestingly, many of the hard parts have nothing to do with large language model technology. Rather, the ability to aggregate excellent purpose-made content, extract meaningful text, adroitly select the text to send to the model, and present the response in a useful way may count for more than the model itself, because the models may become a commodity.
Fortunately, Northern Light is expert at the very things that count most. And that won’t change next month… or ever.