This repository contains code to run faster sentence-transformers using tools like quantization and ONNX. Just run your model much faster, while using a lot less memory. There is not much to it!
```bash
pip install fast-sentence-transformers
```

Or, for GPU support:

```bash
pip install fast-sentence-transformers[gpu]
```
```python
from fast_sentence_transformers import FastSentenceTransformer as SentenceTransformer

# use any sentence-transformers model
encoder = SentenceTransformer("all-MiniLM-L6-v2", device="cpu", quantize=True)

# encode a single sentence or a batch
encoder.encode("Hello hello, hey, hello hello")
encoder.encode(["Life is too short to eat bad food!"] * 2)
```
An indicative benchmark for CPU usage with the smallest and largest models on sentence-transformers. Note that ONNX does not support quantization on GPU yet.
| Model                                 | Type   | default | ONNX | ONNX+quantized | ONNX+GPU |
|---------------------------------------|--------|---------|------|----------------|----------|
| paraphrase-albert-small-v2            | memory | 1x      | 1x   | 1x             | 1x       |
|                                       | speed  | 1x      | 2x   | 5x             | 20x      |
| paraphrase-multilingual-mpnet-base-v2 | memory | 1x      | 1x   | 4x             | 4x       |
|                                       | speed  | 1x      | 2x   | 5x             | 20x      |
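These numbers are indicative; a rough way to check the speed column on your own hardware is a simple wall-clock comparison (timings vary with CPU, batch size, and sentence length):

```python
import time

from sentence_transformers import SentenceTransformer
from fast_sentence_transformers import FastSentenceTransformer

sentences = ["Life is too short to eat bad food!"] * 256

encoders = {
    "default": SentenceTransformer("paraphrase-albert-small-v2", device="cpu"),
    "ONNX+quantized": FastSentenceTransformer(
        "paraphrase-albert-small-v2", device="cpu", quantize=True
    ),
}

# time a single encode pass per encoder
for name, encoder in encoders.items():
    start = time.perf_counter()
    encoder.encode(sentences)
    print(f"{name}: {time.perf_counter() - start:.2f}s")
```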
This package heavily leans on sentence-transformers and txtai.
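For intuition, speedups of this kind typically come from exporting the transformer to ONNX and applying dynamic INT8 quantization. Below is a minimal sketch of those two steps done by hand with Hugging Face Optimum; it illustrates the general technique, not this package's exact internals, and the model name and save paths are placeholders:

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# export the transformer to ONNX
model = ORTModelForFeatureExtraction.from_pretrained(
    "sentence-transformers/all-MiniLM-L6-v2", export=True
)
model.save_pretrained("onnx-minilm")

# dynamic (post-training) INT8 quantization on CPU;
# avx512_vnni targets recent Intel CPUs, other presets exist
quantizer = ORTQuantizer.from_pretrained(model)
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="onnx-minilm-quantized", quantization_config=qconfig)
```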