Why Retrieval-Augmented Generation Is Still Relevant in the Era of Long-Context Language Models
In this article, we will explore why models with 128K-token (and larger) context windows still can't fully replace RAG.

We'll start with a brief reminder of the problems RAG solves, before looking at recent improvements in LLMs and their impact on the need for RAG.

Illustration by the author.

Let's start with a bit