Now, Enterprise Search Speaks our Language
ChatGPT Shines the Spotlight on Search
I’ve been in the enterprise search business for seven years, and some of my colleagues have been doing search for three decades. In that time, a lot has changed: search went from an arcane corner of academia (information retrieval) to the center of our lives. Card catalogs were replaced with computer terminals using search. Google replaced Yahoo with search. Netflix replaced Blockbuster, in part with search. Amazon dominated online retail because search meant they could carry an unparalleled product selection. TikTok replaced Facebook, in large part, with search (a recommendation algorithm is basically search, but instead of a written query, a person’s history is used to find content they’ll like).
Today, almost everything we do online starts with or is fed by search.
Yet, we usually don’t pay much attention to search; after all, it’s supposed to be invisible. It’s a means to an end, and most of the time, it works well enough. We only notice it when it doesn’t. And most of the time that it doesn’t work is when we are at work trying to do work. Compared to most consumer applications, searching within an enterprise is harder (many disparate systems and formats, security, sparse and poor-quality metadata, etc.), and enterprises can’t afford to invest the time and money that consumer companies do. So search is often overlooked.
Until now.
ChatGPT has brought search into the spotlight. ChatGPT isn't search – but it makes search results far easier to consume by presenting them in readable, natural language. More importantly, the technology that powers ChatGPT (large language models, or LLMs) is vastly improving the quality of search results.
How Large Language Models (LLMs) Help
Large Language Models are a specific kind of AI – deep neural networks that learn the statistical patterns of language from vast amounts of text. That doesn't mean they understand meaning the way we do, but it's often close enough, because those patterns track meaning closely. This has turned traditional computing on its head – in the past, humans had to learn a language the computer understood. With LLMs, computers have learned how to speak with us – they now speak our language.
In most cases, a raw LLM is of limited use; it becomes powerful with additional training (fine-tuning) that steers those language skills toward a specific task. That's what OpenAI did with ChatGPT: they fine-tuned an LLM (GPT-3.5) to carry on a conversation, following instructions, responding to consecutive prompts, and remembering the context built up throughout the conversation. The result was the fastest-growing consumer application of all time.
Sinequa’s Use of LLMs
Sinequa has been adapting LLMs to the complex problem of searching enterprise content since they first emerged in late 2019. LLMs are complex, costly to train, and tedious to fine-tune, so adapting them to a specific task is difficult. But the work paid off, and last year we introduced Neural Search.
We developed four models, adapted to perform tasks specific to search:
- The meaning encoder, which maps a snippet of text to a vector representing its meaning
- The answer ranker, which orders a set of answers from best to worst for a given query
- The answer finder, which identifies factual answers within a snippet of text
- The query generator, which creates alternate formulations of a query with similar meaning, helping find relevant content that uses different words or phrasing
These models are available in multiple languages and sizes, to balance accuracy against computational cost. We've optimized them to run efficiently and cost-effectively at enterprise scale, bringing LLMs to enterprise search for the first time, with no setup or configuration required: just turn it on and go.
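To make the division of labor among these models concrete, here is a minimal sketch of how an encoder, a ranker, and an answer finder can be chained together at query time. It uses open-source models (sentence-transformers and a Hugging Face question-answering pipeline) purely as stand-ins for the roles described above; these are not Sinequa's actual models or API, and the query generator step is omitted.

```python
# Minimal sketch: open-source stand-ins for the encoder, ranker, and answer
# finder roles described above (not Sinequa's models or API).
from sentence_transformers import SentenceTransformer, CrossEncoder, util
from transformers import pipeline

snippets = [
    "Employees can reset their VPN password from the self-service portal.",
    "The Paris office is closed on public holidays.",
    "Expense reports must be submitted within 30 days of travel.",
]
query = "How do I change my VPN password?"

# Meaning encoder: map the query and each snippet to a vector, then compare
# them by similarity to shortlist candidates.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
snippet_vecs = encoder.encode(snippets, convert_to_tensor=True)
query_vec = encoder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_vec, snippet_vecs, top_k=3)[0]
candidates = [snippets[hit["corpus_id"]] for hit in hits]

# Answer ranker: re-order the candidates with a cross-encoder that scores each
# (query, snippet) pair jointly, from best to worst.
ranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = ranker.predict([(query, c) for c in candidates])
best_snippet = candidates[int(scores.argmax())]

# Answer finder: extract the factual answer span from the best snippet.
answer_finder = pipeline("question-answering", model="deepset/roberta-base-squad2")
print(answer_finder(question=query, context=best_snippet)["answer"])
```

In a real deployment, the snippet vectors would typically be computed once at indexing time and stored, so only the query needs to be encoded when someone searches.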
Neural Search: Focused, Fast, and Forgiving
Incorporating these models into our platform has dramatically improved the search experience. Sinequa's Neural Search is more focused, faster, and more forgiving than ever before.
Focused because Neural Search works on small snippets of text (usually a sentence or two) rather than an entire document. Most of the information we need in our daily work is not a 30-page document but a small passage within it. Searching for and displaying the most relevant snippets produces much more focused results.
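As a rough illustration of what "working on snippets" means, the sketch below splits a document into short, overlapping passages before they are indexed. The splitting rule, window size, and overlap are illustrative assumptions, not Sinequa's actual chunking strategy.

```python
# Illustrative chunking: split a document into short, overlapping snippets so
# each passage can be encoded and retrieved on its own.
import re

def split_into_snippets(text, sentences_per_snippet=2, overlap=1):
    # Naive sentence splitting; a production system would use a more robust
    # sentence tokenizer and format-aware segmentation.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    step = max(sentences_per_snippet - overlap, 1)
    return [
        " ".join(sentences[start:start + sentences_per_snippet])
        for start in range(0, len(sentences), step)
    ]

doc = (
    "Travel must be booked through the approved portal. "
    "Expense reports are due within 30 days of the trip. "
    "Receipts are required for any item over 25 euros."
)
for snippet in split_into_snippets(doc):
    print(snippet)
```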
Fast because working with snippets rather than documents means immediate answers. A list of highly relevant snippets can be scanned in seconds, and the top snippets often contain the answer without the need to open the underlying document. Snippets also help with complex questions, where the needed information doesn't live in a single place: by surfacing the most relevant passages from multiple documents, Neural Search makes it quick to synthesize an answer.
Forgiving because Neural Search works with natural language. Search no longer depends on keywords, synonyms, or the employee's ability to phrase a query the computer can understand. Now, natural language, with all of its complexities and ambiguities, is the best way to search, because Neural Search uses the context and nuance of the query (plus the context and nuance of the content!) to find the best match based on meaning.
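One way to see the difference: the query and the snippet below share no keywords at all, so a purely term-based search scores the pair at zero, while a meaning encoder (again an open-source stand-in, not Sinequa's model) can still recognize that they are about the same thing.

```python
# Keyword overlap vs. meaning-based matching (open-source encoder as a stand-in).
import re
from sentence_transformers import SentenceTransformer, util

query = "Can I bring my dog to the office?"
snippet = "Pets are not permitted in company buildings."

# Term-based view: no shared words, so a pure keyword search finds nothing.
query_terms = set(re.findall(r"\w+", query.lower()))
snippet_terms = set(re.findall(r"\w+", snippet.lower()))
print("shared keywords:", query_terms & snippet_terms)  # -> empty set

# Meaning-based view: the encoder still relates the two by meaning.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
similarity = util.cos_sim(encoder.encode(query), encoder.encode(snippet))
print("semantic similarity:", float(similarity))
```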
Conclusion
Search is at the core of everything we do, but search in enterprise applications (and corporate intranets, file shares, etc.) has lagged far behind the quality and efficacy we've come to expect from our daily interactions with the likes of Google, Amazon, and Netflix. Neural Search changes that: it brings the power of LLMs to finding the right information within the enterprise, replacing clunky keyword guessing with a tool that works the way we do. Finally, enterprise search speaks our language.