Anush008/fastembed-rs

How to load a local embedding model.

RWayne93 opened this issue · 4 comments

I don’t see in the documentation how I can instantiate an embedding model from my local file system, given the file path to a downloaded embedding model.

I’m assuming this reaches out to Hugging Face and stores the embedding model in a local cache somewhere.

Currently, the specified model is downloaded and cached locally (you can specify the cache path in `InitOptions`).
With #40, you should be able to use any local model files.

Thanks for the fast reply. I see other sentence-transformer models on the supported list, so once #40 is merged, would I be able to bring my own model, like all-mpnet-base-v2, without issues?

You could also open a PR to add the model to the library, if it has an ONNX source on Hugging Face.

For example: #36