An LLM can handle general routing. Semantic search can handle private data better. Which one would you pick?
A single prompt cannot handle every task, and a single data source cannot serve every query.
Here’s something you often see in production but not in demos:
You need more than one data source to retrieve information: perhaps a couple of vector stores, a graph DB, or even a SQL database. And you need different prompts to handle different tasks, too.
And then we have a problem: given unstructured, often ambiguous, and poorly formatted user input, how do we decide which database to retrieve data from?
If, for some reason, you still think this is trivial, here's an example.
Suppose you have a tour-guiding chatbot, and one traveler asks for an optimal travel schedule between five places. Letting the LLM answer directly invites hallucination, as LLMs aren't good at location-based calculations.
Instead, if you store this information in a graph database, the LLM can generate a query that fetches the shortest travel path between the points…
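To make that idea concrete, here is a minimal sketch, assuming the places live in a Neo4j graph as `Place` nodes connected by `ROAD` relationships, and assuming the `neo4j` and `openai` Python clients. The model name, connection details, and schema are all placeholders, not a prescribed setup:

```python
from neo4j import GraphDatabase
from openai import OpenAI

# Hypothetical connection details; swap in your own.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
llm = OpenAI()

# A tiny schema hint so the model knows what it can query against.
SCHEMA_HINT = (
    "Nodes: (:Place {name}). "
    "Relationships: (:Place)-[:ROAD {minutes}]->(:Place)."
)

def generate_cypher(question: str) -> str:
    """Ask the LLM to translate the traveler's question into a Cypher query."""
    response = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Translate the question into a single Cypher query. "
                    f"Graph schema: {SCHEMA_HINT} Return only the query."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

def run_query(cypher: str):
    # In production you would validate the generated query before running it.
    with driver.session() as session:
        return [record.data() for record in session.run(cypher)]

# Route-planning questions go to the graph DB instead of the LLM's "memory".
question = "What is the shortest route from the Old Town to the Botanical Garden?"
cypher = generate_cypher(question)  # e.g. a MATCH ... shortestPath(...) query
print(run_query(cypher))
```

The point isn't the specific query; it's that the graph DB does the path-finding the LLM is bad at, while the LLM only translates the question into a query.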