AI is upending search as we know it

Generative AI was always going to upend search. It’s a technology that can answer virtually any question posed to it. And in the process of changing the world of search, AI developers latched onto something else, retrieval, that would meld search and generative AI even more tightly.

Generative AI has changed three essential aspects of search: how people ask for information, how systems gather data for answers, and how companies can offer that information to customers.

For years, Google has dominated search. As the dominant search engine (with almost 82% of search traffic), it dictated how users look for information and how brands show up in results. Companies had to lean into search engine optimization (SEO) strategies, and people learned to compress their questions into keyword salad. It didn’t always yield good results, but it was passable: everyone learned to translate their questions into keywords and to guess which of the websites on the results page might have what they were looking for.

Large language models (LLMs) changed that, especially when deployed in chatbots like OpenAI’s ChatGPT. People could suddenly ask any question they wanted (within the bounds of the guardrails) and get an answer right back. There’s no need to click through a series of websites; the answer is explained to you directly.

“If you had an AI that could understand the search results and talk to you in an obvious manner to explain the results to you, that is a monumental improvement over trying to go through five million pages of results,” said Christian Ward, chief data officer at data management platform Yext, in an interview with VentureBeat. 

Asking questions instead of keywords

Generative AI now encourages people to ask actual questions in natural language instead of stringing together keywords. It lets people get the information they need rather than hunting for a good-enough website.

AI company Perplexity took advantage of this shift in search behavior and positioned itself as a search engine rather than a chatbot for generating code or art. The company partnered with data providers like Yelp and Wolfram Alpha to source better data, and the strategy has worked: VentureBeat reported that Perplexity’s platform has grown in traffic referrals.

Even Google realized it could use its vast access to data to, as it put it at Google I/O, do the Googling for you. Besides integrating Google Search into its Gemini chatbot, it introduced AI Overviews, which summarize query results.

But for enterprises, it isn’t just about letting users ask questions in natural language; it’s also about making sure the answers draw only on their own documents.

Retrieval-augmented generation (RAG) is becoming a big trend in the generative AI space as model providers look for ways to offer additional services to enterprises. RAG lets companies “ground” AI models in their own data, ensuring that answers come from documents within the company.
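
In practice, grounding means retrieving the most relevant internal documents and feeding them to the model as context along with the question. Here is a minimal, self-contained sketch of that flow; the documents, the word-overlap retriever, and the prompt wording are all illustrative placeholders, not any vendor’s actual API:

```python
# Minimal RAG sketch: retrieve relevant internal documents, then ground
# the model's answer in them via the prompt. Everything here is a toy
# stand-in for a real embedding-based retriever and hosted LLM.
import string

DOCS = {
    "returns.md": "Items can be returned within 30 days with a receipt.",
    "shipping.md": "Standard shipping takes 3 to 5 business days.",
    "warranty.md": "All products carry a one-year limited warranty.",
}

def tokenize(text: str) -> set[str]:
    """Lowercase and strip punctuation so 'days.' matches 'days'."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query, return top k."""
    ranked = sorted(
        DOCS,
        key=lambda name: len(tokenize(query) & tokenize(DOCS[name])),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that restricts the model to retrieved context."""
    context = "\n".join(f"[{name}] {DOCS[name]}" for name in retrieve(query))
    return (
        "Answer using ONLY the context below and cite sources in brackets.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# The assembled prompt would then be sent to whatever LLM the company uses.
print(build_prompt("How many days do I have to return items with a receipt?"))
```

Because the model is instructed to answer only from the supplied context and to cite it, responses can be traced back to specific company documents, which is the property enterprises are after.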

“LLMs have gotten quite good, so all of a sudden, you can do a lot of things. But really, I’ve seen a ton of interest in cases like customer support and other internal use cases because companies are very comfortable with the risks involved,” said Ben Flast, director of product management at MongoDB. 

He added that the value of a RAG architecture lies in its ability to refer to actual documents, making it easy for users to get close to the answers they’re looking for.

Hyperscalers like Amazon Web Services (AWS) and Microsoft have begun offering RAG-specific services to clients, and the broader RAG ecosystem is growing: companies like Elastic, Pinecone and Qdrant provide the vector databases that RAG frameworks rely on for retrieval. Flast also pointed out that monitoring tools for RAG systems are still in their infancy.
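
To make the retrieval layer concrete, here is a toy version of what a vector database does: store embeddings and return the nearest neighbors of a query vector. The hand-made three-dimensional vectors below are stand-ins; real systems use embeddings produced by a model and approximate-nearest-neighbor indexes at scale.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy index: in production these embeddings would come from a model and
# live in a service such as Elastic, Pinecone or Qdrant.
index = {
    "returns policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "warranty terms": [0.0, 0.2, 0.9],
}

query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "can I return this?"
best = max(index, key=lambda name: cosine(index[name], query_vec))
print(best)  # -> returns policy
```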

Enterprises are embracing RAG more and more, but for now many of its use cases remain internal because the systems are still prone to hallucination. Providers encourage enterprises to evaluate RAG models before putting them in front of customers. AWS, which made RAG a big part of its generative AI strategy with its Amazon Q product, came up with a new method to test the accuracy of RAG results.
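
How that testing works varies by provider, and AWS’s exact method isn’t described here, but a common baseline is a groundedness check: confirm that every sentence of a generated answer is supported by one of the retrieved passages. A crude, self-contained sketch, with word overlap standing in for the LLM or entailment model that real evaluators use as the judge:

```python
import string

def words(text: str) -> set[str]:
    """Lowercase, punctuation-stripped word set for rough comparison."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def is_grounded(answer_sentences: list[str], passages: list[str]) -> bool:
    """Flag an answer as grounded only if every sentence shares at least
    three words with some retrieved passage; anything that fails is a
    candidate hallucination. A real judge would score semantic support,
    not raw word overlap."""
    return all(
        any(len(words(s) & words(p)) >= 3 for p in passages)
        for s in answer_sentences
    )

passages = ["Items can be returned within 30 days with a receipt."]
print(is_grounded(["Items can be returned within 30 days."], passages))  # True
print(is_grounded(["Returns are accepted for a full year."], passages))  # False
```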

Company-specific search platforms could be the future

As RAG grows, companies could face another change in search. The avenues for posing a search query keep multiplying, so enterprises need to decide whether to serve answers from their own data or continue passively feeding information to an aggregator like Google. Serving answers directly would let them control how their information is presented to customers.

Yext’s Ward said there may come a time when every company builds its own search platform, powered by RAG and generative AI, so customers can find the best answers grounded in the brand’s data. Enterprises that ground search in their own data can give customers answers specific to their products and services. For example, if someone wants to know how many colors a pair of Everlane pants comes in, instead of going to a big search engine like Google, they can go to the Everlane website and ask its platform directly.

“It’s not the end of search, but there may be a decentralization of search for certain search queries. If I want to know the closest pizza shop, that’s what Google is for, but if I want to understand allergen info for the shop, I need to ask the shop itself,” he said. 

The upcoming VB Transform 2024 conference will explore these themes further, with expert panels of industry leaders discussing the cross-functional future of AI. We hope to see you there!