

MIT License

ColPali Cookbooks 👀


[ColPali Engine] [ViDoRe Benchmark]

Introduction

With our new model ColPali, we propose leveraging Vision Language Models (VLMs) to construct efficient multi-vector embeddings directly in the visual space for document retrieval. By feeding the ViT output patches from PaliGemma-3B through a linear projection, we obtain a multi-vector representation of each document page. The model is trained to maximize the similarity between these document embeddings and the corresponding query embeddings, following the late-interaction approach of ColBERT.
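For intuition, the ColBERT-style late-interaction score sums, over query tokens, the maximum similarity against all document patch vectors. A minimal NumPy sketch with toy dimensions (the shapes and random data here are illustrative, not ColPali's actual configuration):

```python
import numpy as np

def late_interaction_score(query_emb: np.ndarray, doc_emb: np.ndarray) -> float:
    """ColBERT-style MaxSim scoring.

    query_emb: (n_query_tokens, dim) L2-normalized query token embeddings.
    doc_emb:   (n_doc_patches, dim)  L2-normalized document patch embeddings.
    For each query token, take the max similarity over all document patches,
    then sum over query tokens.
    """
    sims = query_emb @ doc_emb.T          # (n_query_tokens, n_doc_patches)
    return float(sims.max(axis=1).sum())  # MaxSim per token, summed

# Toy example: 4 query tokens, 32 document patches, 128-dim embeddings.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 128))
q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(32, 128))
d /= np.linalg.norm(d, axis=1, keepdims=True)

score = late_interaction_score(q, d)
```

Because the vectors are normalized, each per-token maximum is at most 1, so the score is bounded by the number of query tokens; scoring a document against itself saturates that bound.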

Using ColPali removes the need for potentially complex and brittle layout-recognition and OCR pipelines: a single model takes into account both the textual and visual content (layout, charts, ...) of a document.

This repository contains notebooks for learning, fine-tuning, and adapting ColPali to your multimodal RAG use cases.

| Notebook | Description |
|:---|:---|
| Interpretability: "ColPali: Generate your own similarity maps" | Generate your own similarity maps to interpret ColPali's predictions. |
| Fine-tuning: "Fine-tune ColPali" | Fine-tune ColPali using LoRA and optional 4-bit/8-bit quantization. |
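The fine-tuning notebook relies on LoRA, which freezes the base weights and learns only a low-rank additive update. A rough, self-contained NumPy sketch of that idea (the dimensions, rank, and scaling below are toy values, not the notebook's actual configuration):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=8):
    """Linear layer with a LoRA update: y = x W^T + (alpha/r) * x A^T B^T.

    W: (d_out, d_in) frozen pretrained weight.
    A: (r, d_in)     trainable down-projection.
    B: (d_out, r)    trainable up-projection, initialized to zero so the
                     adapted layer starts out identical to the frozen one.
    """
    scaling = alpha / r
    return x @ W.T + scaling * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 8
W = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01     # small random init
B = np.zeros((d_out, r))                  # zero init: update starts at zero

x = rng.normal(size=(2, d_in))
y = lora_forward(x, W, A, B)
```

Only A and B (2 * r * d parameters per layer instead of d_out * d_in) are trained, which is what makes fine-tuning feasible on modest hardware, especially combined with 4-bit/8-bit quantization of the frozen weights.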

Instructions

Open with Colab

The easiest way to use the notebooks is to open them from the examples directory and click on the Colab button below:

Colab

This will open the notebook in Google Colab, where you can run the code and experiment with the models.

Run locally

If you prefer to run the notebooks locally, you can clone the repository and open the notebooks in Jupyter Notebook or in your IDE.

Citation

ColPali: Efficient Document Retrieval with Vision Language Models

Authors: Manuel Faysse*, Hugues Sibille*, Tony Wu*, Bilel Omrani, Gautier Viaud, Céline Hudelot, Pierre Colombo (* denotes equal contribution)

@misc{faysse2024colpaliefficientdocumentretrieval,
      title={ColPali: Efficient Document Retrieval with Vision Language Models}, 
      author={Manuel Faysse and Hugues Sibille and Tony Wu and Bilel Omrani and Gautier Viaud and Céline Hudelot and Pierre Colombo},
      year={2024},
      eprint={2407.01449},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2407.01449}, 
}