[Taken from the Model Card]
SigLIP is CLIP, a multimodal image-text model, but with a better loss function: the sigmoid loss. This loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. That allows further scaling up of the batch size, while also performing better at smaller batch sizes.
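To make the contrast with CLIP's softmax-based contrastive loss concrete, here is a minimal, illustrative sketch of the pairwise sigmoid loss in PyTorch (variable names are ours; `logit_scale` and `logit_bias` are learned parameters in the actual model):

```python
import torch
import torch.nn.functional as F

def sigmoid_loss(image_embeds, text_embeds, logit_scale, logit_bias):
    # image_embeds, text_embeds: L2-normalized tensors of shape (batch, dim)
    logits = image_embeds @ text_embeds.t() * logit_scale + logit_bias
    # Matching pairs sit on the diagonal (label +1); every other pair is a negative (label -1).
    labels = 2 * torch.eye(logits.size(0), device=logits.device) - 1
    # Each pair is an independent binary classification problem, so no batch-wide
    # softmax normalization is needed -- which is what allows scaling up the batch size.
    return -F.logsigmoid(labels * logits).sum() / logits.size(0)
```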
A TL;DR of SigLIP by one of the authors can be found here.
This repository shows how you can utilize SigLIP for search in different modalities.
📚 It contains:
- A notebook on how to create an embedding index using SigLIP with Hugging Face Transformers and FAISS,
- An image similarity search application that uses the created index (link to the 🤗 Space),
- An application that compares SigLIP and CLIP (link to the 🤗 Space),
- An application that compares SigLIP against NLLB-CLIP and CLIP-ViT for multilingual inference (link to the 🤗 Space),
- Another notebook on indexing text embeddings using the 🤗 Datasets-FAISS integration (a minimal indexing sketch follows this list).
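As a rough sketch of what the indexing notebooks do (the dataset name and column names below are placeholders, not necessarily the ones used in the notebooks), you can embed images with SigLIP and attach a FAISS index through the 🤗 Datasets integration:

```python
import numpy as np
import torch
from datasets import load_dataset
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("google/siglip-base-patch16-256-i18n")
processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-256-i18n")

# Placeholder dataset with an "image" column; swap in your own data.
dataset = load_dataset("beans", split="train")

def embed(batch):
    inputs = processor(images=batch["image"], return_tensors="pt")
    with torch.no_grad():
        batch["embeddings"] = model.get_image_features(**inputs).numpy()
    return batch

dataset = dataset.map(embed, batched=True, batch_size=16)

# Build a FAISS index over the embedding column and query it with one embedding.
dataset.add_faiss_index(column="embeddings")
query = np.array(dataset[0]["embeddings"], dtype=np.float32)
scores, examples = dataset.get_nearest_examples("embeddings", query, k=5)
```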
![Screenshot 2024-01-08 at 22 23 44](https://private-user-images.githubusercontent.com/53175384/295014767-c621f100-2f29-407e-a233-1f74f4919131.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MjE4MjMxOTYsIm5iZiI6MTcyMTgyMjg5NiwicGF0aCI6Ii81MzE3NTM4NC8yOTUwMTQ3NjctYzYyMWYxMDAtMmYyOS00MDdlLWEyMzMtMWY3NGY0OTE5MTMxLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA3MjQlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNzI0VDEyMDgxNlomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWUwMTA1MjE3NDczY2Y1ZDVkOGQ5YjhhYTYxZTRjMDNiNDcyMzFlNzkyMDEzYzRlYWEzM2U1ZmI5ZmJmMDhmMTQmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.spy7q8Gnin807cb5yamjvWg0LiwIMp5-zfUFFOzoMBU)
You can use the raw SigLIP model for tasks like zero-shot image classification and image-text retrieval (a retrieval sketch follows the examples below). See the SigLIP checkpoints on the Hugging Face Hub to look for other versions of the model for the task that interests you.
Here is how to use this model to perform zero-shot image classification:
```python
from PIL import Image
import requests
import torch
from transformers import AutoProcessor, AutoModel

model = AutoModel.from_pretrained("google/siglip-base-patch16-256-i18n")
processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-256-i18n")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

texts = ["a photo of 2 cats", "a photo of 2 dogs"]
# important: pass padding="max_length", since the model was trained with it
inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)  # these are the probabilities
print(f"{probs[0][0]:.1%} that image 0 is '{texts[0]}'")
```
Alternatively, one can leverage the pipeline API, which abstracts away the complexity for the user:
```python
from transformers import pipeline
from PIL import Image
import requests

# load pipeline
image_classifier = pipeline(task="zero-shot-image-classification", model="google/siglip-base-patch16-256-i18n")

# load image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# inference
outputs = image_classifier(image, candidate_labels=["2 cats", "a plane", "a remote"])
outputs = [{"score": round(output["score"], 4), "label": output["label"]} for output in outputs]
print(outputs)
```
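For image-text retrieval, a minimal sketch (assuming the same checkpoint as above, with illustrative candidate texts) is to embed the image and the texts separately and rank the candidates by cosine similarity:

```python
import torch
import requests
from PIL import Image
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("google/siglip-base-patch16-256-i18n")
processor = AutoProcessor.from_pretrained("google/siglip-base-patch16-256-i18n")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["two cats sleeping on a couch", "a dog catching a frisbee"]

with torch.no_grad():
    image_embeds = model.get_image_features(**processor(images=image, return_tensors="pt"))
    text_embeds = model.get_text_features(**processor(text=texts, padding="max_length", return_tensors="pt"))

# Normalize, then rank the candidate texts by cosine similarity to the image.
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
similarity = (image_embeds @ text_embeds.t()).squeeze(0)
print(texts[similarity.argmax()])
```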
For more code examples, we refer to the documentation.
Citation
```bibtex
@misc{zhai2023sigmoid,
      title={Sigmoid Loss for Language Image Pre-Training},
      author={Xiaohua Zhai and Basil Mustafa and Alexander Kolesnikov and Lucas Beyer},
      year={2023},
      eprint={2303.15343},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```