Multimodal Search Engine Agents Powered by BLIP-2 and Gemini

This post was co-authored with Rafael Guedes.

Introduction

Traditional models can only process a single type of data, such as text, images, or tabular data. Multimodality is a trending concept in the AI research community, referring to a model’s ability to learn from multiple types of data simultaneously. The technology is not exactly new, but it has improved significantly in the last few months, and it has numerous potential applications that will transform the user experience of many products.

One good example is how search engines will work in the future, with users able to input queries that combine modalities such as text, images, and audio. Another is AI-powered customer support that handles both voice and text inputs. In e-commerce, multimodal models are enhancing product discovery by allowing users to search with images and text. We will use the latter as our case study in this article.

The frontier AI research labs are shipping several models that support multiple modalities every month. CLIP and DALL-E by OpenAI and BLIP-2 by Salesforce combine image and text. ImageBind by Meta expanded the multiple modality concept to six modalities (text, audio, depth, thermal, image, and inertial measurement units).

In this article, we will explore BLIP-2 by explaining its architecture, how its loss functions work, and its training process. We also present a practical use case that combines BLIP-2 and Gemini to create a multimodal fashion search agent that can assist customers in finding the best outfit based on text-only or text-and-image prompts.

Figure 1: Multimodal Search Agent (image by author with Gemini)

As always, the code is available on our GitHub.

BLIP-2: a multimodal model

BLIP-2 (Bootstrapping Language-Image Pre-training) [1] is a vision-language model designed to solve tasks such as visual question answering and multimodal reasoning based on inputs of both modalities: image and text. As we will see below, this model was developed to address two main challenges in the vision-language domain:

  1. Reducing computational cost by using frozen pre-trained visual encoders and LLMs, which drastically lowers the training resources needed compared to jointly training vision and language networks.
  2. Improving visual-language alignment by introducing the Q-Former, which brings visual and textual embeddings closer together, leading to better performance on reasoning tasks and the ability to perform multimodal retrieval.

Architecture

The architecture of BLIP-2 follows a modular design that integrates three modules:

  1. Visual Encoder is a frozen visual model, such as ViT, that extracts visual embeddings from the input images (which are then used in downstream tasks).
  2. Querying Transformer (Q-Former) is the key to this architecture. It consists of a trainable lightweight transformer that acts as an intermediate layer between the visual and language models. It is responsible for generating contextualized queries from the visual embeddings so that they can be processed effectively by the language model.
  3. LLM is a frozen pre-trained LLM that processes refined visual embeddings to generate textual descriptions or answers.
Figure 2: BLIP-2 architecture (image by author)
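
To make this modular design concrete, the sketch below runs the full pipeline (frozen visual encoder → Q-Former → frozen LLM) for visual question answering using the Hugging Face transformers implementation of BLIP-2. This is a minimal sketch, not part of the use case code: the checkpoint shown pairs a ViT-g encoder with an OPT-2.7B language model, and the image path and question are placeholders.

import torch
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

# load processor and model (frozen ViT + Q-Former + OPT-2.7B)
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float32
)

image = Image.open("example.jpg")  # placeholder path
prompt = "Question: what is the person in the picture wearing? Answer:"

# the image is encoded by the frozen ViT, refined by the Q-Former, and the resulting
# visual tokens are passed to the frozen LLM together with the prompt for generation
inputs = processor(images=image, text=prompt, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())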

Loss Functions

BLIP-2 has three loss functions to train the Q-Former module:

  • Image-text contrastive loss [2] enforces the alignment between visual and text embeddings by maximizing the similarity of paired image-text representations while pushing apart dissimilar pairs.
  • Image-text matching loss [3] is a binary classification loss that aims to make the model learn fine-grained alignments by predicting whether a text description matches the image (positive, i.e., target=1) or not (negative, i.e., target=0).
  • Image-grounded text generation loss [4] is a cross-entropy loss, as used in LLMs, that predicts the probability of the next token in the sequence. Since the Q-Former architecture does not allow direct interactions between the frozen image encoder and the text tokens, the information required to generate the text must first be extracted by the queries, forcing the model to capture the most relevant visual features.

For both image-text contrastive loss and image-text matching loss, the authors used in-batch negative sampling, which means that if we have a batch size of 512, each image-text pair has one positive sample and 511 negative samples. This approach increases efficiency since negative samples are taken from the batch, and there is no need to search the entire dataset. It also provides a more diverse set of comparisons, leading to a better gradient estimation and faster convergence.

Figure 3: Training losses explained (image by author)
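
As a rough sketch of how the image-text contrastive loss with in-batch negatives can be computed (a simplification: BLIP-2 actually computes one similarity per query token and keeps the highest, and learns the temperature, both of which we omit here):

import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(
    image_embeds: torch.Tensor, text_embeds: torch.Tensor, temperature: float = 0.07
) -> torch.Tensor:
    """Simplified image-text contrastive loss with in-batch negatives.

    Both inputs have shape (batch_size, dim) and are L2-normalized; row i of each
    tensor belongs to the same image-text pair.
    """
    # similarity matrix: entry (i, j) compares image i with text j
    logits = image_embeds @ text_embeds.t() / temperature
    # the positive pair for each row/column sits on the diagonal; with a batch of 512,
    # each row has 1 positive and 511 in-batch negatives
    targets = torch.arange(image_embeds.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image-to-text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text-to-image direction
    return (loss_i2t + loss_t2i) / 2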

Training Process

The training of BLIP-2 consists of two stages:

Stage 1 – Bootstrapping visual-language representation:

  1. The model receives images as input, which are converted into embeddings by the frozen visual encoder.
  2. Together with these images, the model receives their text descriptions, which are also converted into embeddings.
  3. The Q-Former is trained using the image-text contrastive loss, ensuring that the visual embeddings align closely with their corresponding textual embeddings and move away from non-matching text descriptions. At the same time, the image-text matching loss helps the model develop fine-grained representations by learning to classify whether a given text correctly describes the image.
Figure 4: Stage 1 training process (image by author)

Stage 2 – Bootstrapping vision-to-language generation:

  1. The pre-trained language model is integrated into the architecture to generate text based on the previously learned representations.
  2. The focus shifts from alignment to text generation by using the image-grounded text generation loss, which improves the model’s reasoning and text-generation capabilities.
Figure 5: Stage 2 training process (image by author)
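
The image-grounded text generation loss itself is nothing exotic: it is the standard next-token cross-entropy over the caption tokens, computed on logits that were produced conditioned on the Q-Former outputs. A minimal sketch (shapes are illustrative):

import torch
import torch.nn.functional as F

def image_grounded_generation_loss(
    logits: torch.Tensor, input_ids: torch.Tensor, pad_token_id: int
) -> torch.Tensor:
    """Next-token cross-entropy over the caption tokens.

    logits:    (batch, seq_len, vocab_size), produced conditioned on the Q-Former outputs
    input_ids: (batch, seq_len), the caption token ids
    """
    # predict token t+1 from positions up to t: shift logits and labels by one
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=pad_token_id,  # padding positions do not contribute to the loss
    )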

Creating a Multimodal Fashion Search Agent using BLIP-2 and Gemini

In this section, we will leverage the multimodal capabilities of BLIP-2 to build a fashion assistant search agent that can receive input text and/or images and return recommendations. For the conversation capabilities of the agent, we will use Gemini 1.5 Pro hosted in Vertex AI, and for the interface, we will build a Streamlit app.

The fashion dataset used in this use case is licensed under the MIT license and can be accessed through the following link: Fashion Product Images Dataset. It consists of more than 44k images of fashion products.

The first step to make this possible is to set up a Vector DB. This enables the agent to perform a vectorized search based on the image embeddings of the items available in the store and the text or image embeddings from the input. We use docker and docker-compose to help us set up the environment:

  • Docker-Compose with Postgres (the database) and the PGVector extension that allows vectorized search.
services:
  postgres:
    container_name: container-pg
    image: ankane/pgvector
    hostname: localhost
    ports:
      - "5432:5432"
    env_file:
      - ./env/postgres.env
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped

  pgadmin:
    container_name: container-pgadmin
    image: dpage/pgadmin4
    depends_on:
      - postgres
    ports:
      - "5050:80"
    env_file:
      - ./env/pgadmin.env
    restart: unless-stopped

volumes:
  postgres-data:
  • Postgres env file with the variables to log into the database.
POSTGRES_DB=postgres
POSTGRES_USER=admin
POSTGRES_PASSWORD=root
  • Pgadmin env file with the variables to log into the UI for manual querying the database (optional).
PGADMIN_DEFAULT_EMAIL=<your_email>
PGADMIN_DEFAULT_PASSWORD=root
  • Connection env file with all the components to use to connect to PGVector using Langchain.
DRIVER=psycopg
HOST=localhost
PORT=5432
DATABASE=postgres
USERNAME=admin
PASSWORD=root

Once the Vector DB is set up and running (docker-compose up -d), it is time to create the agents and tools to perform a multimodal search. We build two agents to solve this use case: one to understand what the user is requesting and another one to provide the recommendation:

  • The classifier is responsible for receiving the input message from the customer and extracting which category of clothes the user is looking for, for example, t-shirts, pants, shoes, jerseys, or shirts. It will also return the number of items the customer wants so that we can retrieve the exact number from the Vector DB.
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_google_vertexai import ChatVertexAI
from pydantic import BaseModel, Field

class ClassifierOutput(BaseModel):
    """
    Data structure for the model's output.
    """

    category: list = Field(
        description="A list of clothes categories to search for ('t-shirt', 'pants', 'shoes', 'jersey', 'shirt')."
    )
    number_of_items: int = Field(description="The number of items we should retrieve.")

class Classifier:
    """
    Classifier class for classification of input text.
    """

    def __init__(self, model: ChatVertexAI) -> None:
        """
        Initialize the Chain class by creating the chain.
        Args:
            model (ChatVertexAI): The LLM model.
        """
        super().__init__()

        parser = PydanticOutputParser(pydantic_object=ClassifierOutput)

        text_prompt = """
        You are a fashion assistant expert on understanding what a customer needs and on extracting the category or categories of clothes a customer wants from the given text.
        Text:
        {text}

        Instructions:
        1. Read carefully the text.
        2. Extract the category or categories of clothes the customer is looking for, it can be:
            - t-shirt if the customer is looking for a t-shirt.
            - pants if the customer is looking for pants.
            - jacket if the customer is looking for a jacket.
            - shoes if the customer is looking for shoes.
            - jersey if the customer is looking for a jersey.
            - shirt if the customer is looking for a shirt.
        3. If the customer is looking for multiple items of the same category, return the number of items we should retrieve. If not specified but the user asked for more than 1, return 2.
        4. If the customer is looking for multiple categories, the number of items should be 1.
        5. Return a valid JSON with the categories found, the key must be 'category' and the value must be a list with the categories found and 'number_of_items' with the number of items we should retrieve.

        Provide the output as a valid JSON object without any additional formatting, such as backticks or extra text. Ensure the JSON is correctly structured according to the schema provided below.
        {format_instructions}

        Answer:
        """

        prompt = PromptTemplate.from_template(
            text_prompt, partial_variables={"format_instructions": parser.get_format_instructions()}
        )
        self.chain = prompt | model | parser

    def classify(self, text: str) -> ClassifierOutput:
        """
        Get the category from the model based on the text context.
        Args:
            text (str): user message.
        Returns:
            ClassifierOutput: The model's answer.
        """
        try:
            return self.chain.invoke({"text": text})
        except Exception as e:
            raise RuntimeError(f"Error invoking the chain: {e}")
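
As a quick sanity check, the classifier can be exercised on its own. This is a minimal sketch that assumes a Google Cloud project with Vertex AI enabled and application-default credentials configured; the model name is just an example.

from langchain_google_vertexai import ChatVertexAI

# example model name; requires Vertex AI credentials to be set up
model = ChatVertexAI(model_name="gemini-1.5-pro", temperature=0.0)
classifier = Classifier(model)

output = classifier.classify("I need two black t-shirts and a pair of running shoes")
print(output.category)         # e.g. ['t-shirt', 'shoes']
print(output.number_of_items)  # e.g. 1, since multiple categories were requested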
  • The assistant is responsible for answering with a personalized recommendation retrieved from the Vector DB. In this case, we are also leveraging the multimodal capabilities of Gemini to analyze the images retrieved and produce a better answer.
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_google_vertexai import ChatVertexAI
from pydantic import BaseModel, Field

class AssistantOutput(BaseModel):
    """
    Data structure for the model's output.
    """

    answer: str = Field(description="A string with the fashion advice for the customer.")

class Assistant:
    """
    Assistant class for providing fashion advice.
    """

    def __init__(self, model: ChatVertexAI) -> None:
        """
        Initialize the Chain class by creating the chain.
        Args:
            model (ChatVertexAI): The LLM model.
        """
        super().__init__()

        parser = PydanticOutputParser(pydantic_object=AssistantOutput)

        text_prompt = """
        You work for a fashion store, and you are a fashion assistant expert on understanding what a customer needs.
        Based on the items that are available in the store and the customer message below, provide fashion advice for the customer.
        Number of items: {number_of_items}
        
        Images of items:
        {items}

        Customer message:
        {customer_message}

        Instructions:
        1. Check carefully the images provided.
        2. Read carefully the customer needs.
        3. Provide fashion advice for the customer based on the items and the customer message.
        4. Return a valid JSON with the advice, the key must be 'answer' and the value must be a string with your advice.

        Provide the output as a valid JSON object without any additional formatting, such as backticks or extra text. Ensure the JSON is correctly structured according to the schema provided below.
        {format_instructions}

        Answer:
        """

        prompt = PromptTemplate.from_template(
            text_prompt, partial_variables={"format_instructions": parser.get_format_instructions()}
        )
        self.chain = prompt | model | parser

    def get_advice(self, text: str, items: list, number_of_items: int) -> AssistantOutput:
        """
        Get advice from the model based on the text and items context.
        Args:
            text (str): user message.
            items (list): items found for the customer.
            number_of_items (int): number of items to be retrieved.
        Returns:
            AssistantOutput: The model's answer.
        """
        try:
            return self.chain.invoke({"customer_message": text, "items": items, "number_of_items": number_of_items})
        except Exception as e:
            raise RuntimeError(f"Error invoking the chain: {e}")

In terms of tools, we define one based on BLIP-2. It consists of a function that receives a text or image as input and returns normalized embeddings. Depending on the input, the embeddings are produced using the text embedding model or the image embedding model of BLIP-2.

from typing import Optional

import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from PIL.JpegImagePlugin import JpegImageFile
from transformers import AutoProcessor, Blip2TextModelWithProjection, Blip2VisionModelWithProjection

PROCESSOR = AutoProcessor.from_pretrained("Salesforce/blip2-itm-vit-g")
TEXT_MODEL = Blip2TextModelWithProjection.from_pretrained("Salesforce/blip2-itm-vit-g", torch_dtype=torch.float32).to(
    "cpu"
)
IMAGE_MODEL = Blip2VisionModelWithProjection.from_pretrained(
    "Salesforce/blip2-itm-vit-g", torch_dtype=torch.float32
).to("cpu")

def generate_embeddings(text: Optional[str] = None, image: Optional[JpegImageFile] = None) -> np.ndarray:
    """
    Generate embeddings from text or image using the Blip2 model.
    Args:
        text (Optional[str]): customer input text
        image (Optional[Image]): customer input image
    Returns:
        np.ndarray: embedding vector
    """
    if text:
        inputs = PROCESSOR(text=text, return_tensors="pt").to("cpu")
        outputs = TEXT_MODEL(**inputs)
        embedding = F.normalize(outputs.text_embeds, p=2, dim=1)[:, 0, :].detach().numpy().flatten()
    else:
        # cast the pixel values to float32 to match the dtype the image model was loaded with
        inputs = PROCESSOR(images=image, return_tensors="pt").to("cpu", torch.float32)
        outputs = IMAGE_MODEL(**inputs)
        embedding = F.normalize(outputs.image_embeds, p=2, dim=1).mean(dim=1).detach().numpy().flatten()

    return embedding
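
Since BLIP-2 projects text and images into the same embedding space, embeddings from either modality can be compared directly with cosine similarity, which is exactly what the vector search will do under the hood. A quick check (the image path below is hypothetical):

import numpy as np
from PIL import Image

from blip2 import generate_embeddings

text_emb = generate_embeddings(text="a red t-shirt")
image_emb = generate_embeddings(image=Image.open("images/tshirt/example.jpg"))  # hypothetical path

# cosine similarity between the text query and the product image
cosine = np.dot(text_emb, image_emb) / (np.linalg.norm(text_emb) * np.linalg.norm(image_emb))
print(f"cosine similarity: {cosine:.3f}")  # should be higher for images that match the text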

Note that we create the PGVector connection with a placeholder embedding model because the constructor requires one; it will not be used, since we store the embeddings produced by BLIP-2 directly.

In the loop below, we iterate over all categories of clothes, load the images, and append the embeddings to be stored in the vector DB to a list. We also store the path to each image as text so that we can render it in our Streamlit app, and we store the category as metadata so that results can be filtered by the category predicted by the classifier agent.

import glob
import os

from dotenv import load_dotenv
from langchain_huggingface.embeddings import HuggingFaceEmbeddings
from langchain_postgres.vectorstores import PGVector
from PIL import Image

from blip2 import generate_embeddings

load_dotenv("env/connection.env")

CONNECTION_STRING = PGVector.connection_string_from_db_params(
    driver=os.getenv("DRIVER"),
    host=os.getenv("HOST"),
    port=os.getenv("PORT"),
    database=os.getenv("DATABASE"),
    user=os.getenv("USERNAME"),
    password=os.getenv("PASSWORD"),
)

vector_db = PGVector(
    embeddings=HuggingFaceEmbeddings(model_name="nomic-ai/modernbert-embed-base"),  # does not matter for our use case
    collection_name="fashion",
    connection=CONNECTION_STRING,
    use_jsonb=True,
)

if __name__ == "__main__":

    # generate image embeddings
    # save path to image in text
    # save category in metadata
    texts = []
    embeddings = []
    metadatas = []

    for category in glob.glob("images/*"):
        cat = category.split("/")[-1]
        for img in glob.glob(f"{category}/*"):
            texts.append(img)
            embeddings.append(generate_embeddings(image=Image.open(img)).tolist())
            metadatas.append({"category": cat})

    vector_db.add_embeddings(texts, embeddings, metadatas)

We can now build our Streamlit app to chat with our assistant and ask for recommendations. The chat starts with the agent asking how it can help and providing a box for the customer to write a message and/or to upload a file.

Once the customer replies, the workflow is the following:

  • The classifier agent identifies which categories of clothes the customer is looking for and how many units they want.
  • If the customer uploads a file, the file is converted into an embedding, and we look for similar items in the vector DB, filtered by the categories of clothes the customer wants and limited to the requested number of items.
  • The items retrieved and the customer’s input message are then sent to the assistant agent to produce the recommendation message that is rendered together with the images retrieved.
  • If the customer did not upload a file, the process is the same, but instead of generating image embeddings for retrieval, we create text embeddings.
import os

import streamlit as st
from dotenv import load_dotenv
from langchain_google_vertexai import ChatVertexAI
from langchain_huggingface.embeddings import HuggingFaceEmbeddings
from langchain_postgres.vectorstores import PGVector
from PIL import Image

import utils
from assistant import Assistant
from blip2 import generate_embeddings
from classifier import Classifier

load_dotenv("env/connection.env")
load_dotenv("env/llm.env")

CONNECTION_STRING = PGVector.connection_string_from_db_params(
    driver=os.getenv("DRIVER"),
    host=os.getenv("HOST"),
    port=os.getenv("PORT"),
    database=os.getenv("DATABASE"),
    user=os.getenv("USERNAME"),
    password=os.getenv("PASSWORD"),
)

vector_db = PGVector(
    embeddings=HuggingFaceEmbeddings(model_name="nomic-ai/modernbert-embed-base"),  # does not matter for our use case
    collection_name="fashion",
    connection=CONNECTION_STRING,
    use_jsonb=True,
)

model = ChatVertexAI(model_name=os.getenv("MODEL_NAME"), project=os.getenv("PROJECT_ID"), temperature=0.0)
classifier = Classifier(model)
assistant = Assistant(model)

st.title("Welcome to ZAAI's Fashion Assistant")

user_input = st.text_input("Hi, I'm ZAAI's Fashion Assistant. How can I help you today?")

uploaded_file = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])

if st.button("Submit"):

    # understand what the user is asking for
    classification = classifier.classify(user_input)

    if uploaded_file:

        image = Image.open(uploaded_file)
        image.save("input_image.jpg")
        embedding = generate_embeddings(image=image)

    else:

        # create text embeddings in case the user does not upload an image
        embedding = generate_embeddings(text=user_input)

    # create a list of items to be retrieved and the path
    retrieved_items = []
    retrieved_items_path = []
    for item in classification.category:
        clothes = vector_db.similarity_search_by_vector(
            embedding, k=classification.number_of_items, filter={"category": {"$in": [item]}}
        )
        for clothe in clothes:
            retrieved_items.append({"bytesBase64Encoded": utils.encode_image_to_base64(clothe.page_content)})
            retrieved_items_path.append(clothe.page_content)

    # get assistant's recommendation
    assistant_output = assistant.get_advice(user_input, retrieved_items, len(retrieved_items))
    st.write(assistant_output.answer)

    # render the uploaded image (if any) alongside the retrieved items
    images_to_show = (["input_image.jpg"] if uploaded_file else []) + retrieved_items_path
    if images_to_show:
        cols = st.columns(len(images_to_show))
        for col, retrieved_item in zip(cols, images_to_show):
            col.image(retrieved_item)

    user_input = st.text_input("")

else:
    st.warning("Please provide text.")

Both examples can be seen below:

Figure 6 shows an example where the customer uploaded an image of a red t-shirt and asked the agent to complete the outfit.

Figure 6: Example of text and image input (image by author)

Figure 7 shows a more straightforward example where the customer asked the agent to show them black t-shirts.

Figure 7: Example of text input (image by author)

Conclusion

Multimodal AI is no longer just a research topic. It is being used in the industry to reshape the way customers interact with company catalogs. In this article, we explored how multimodal models like BLIP-2 and Gemini can be combined to address real-world problems and provide a more personalized experience to customers in a scalable way.

We explored the architecture of BLIP-2 in depth, demonstrating how it bridges the gap between text and image modalities. To extend its capabilities, we developed a system of agents, each specializing in different tasks. This system integrates an LLM (Gemini) and a vector database, enabling retrieval of the product catalog using text and image embeddings. We also leveraged Gemini’s multimodal reasoning to improve the sales assistant agent’s responses to be more human-like.

With tools like BLIP-2, Gemini, and PGVector, the future of multimodal search and retrieval is already here, and the search engines of tomorrow will look very different from the ones we use today.

About me

Serial entrepreneur and leader in the AI space. I develop AI products for businesses and invest in AI-focused startups.

Founder @ ZAAI | LinkedIn | X/Twitter

References

[1] Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. 2023. BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models. arXiv:2301.12597

[2] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan. 2020. Supervised Contrastive Learning. arXiv:2004.11362

[3] Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, Steven Hoi. 2021. Align before Fuse: Vision and Language Representation Learning with Momentum Distillation. arXiv:2107.07651

[4] Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon. 2019. Unified Language Model Pre-training for Natural Language Understanding and Generation. arXiv:1905.03197