aurelio-labs/semantic-router

Adding an option to specify a local embedding model

Closed this issue · 1 comment

Hello everyone,

I am working on a project where I am running the semantic router on embedded hardware. All of my models, both LLMs and encoding models, reside locally, and I am already using the Hugging Face sentence-transformers encoder with a Chroma vector database. However, the semantic-router library doesn't seem to allow specifying a local encoding model when creating the encoder. For example, HuggingFaceEncoder defaults to the "sentence-transformers/all-MiniLM-L6-v2" name and automatically tries to download the model into the cache as soon as the object is created. I would like to be able to point it at a local copy of the all-MiniLM-L6-v2 model instead, so that it doesn't try to download anything. I installed the library with "semantic-router[local]".
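
For reference, this is roughly the behavior I mean (a minimal sketch of what I'm running):

```python
from semantic_router.encoders import HuggingFaceEncoder

# Instantiating with the default name tries to fetch
# "sentence-transformers/all-MiniLM-L6-v2" from the Hugging Face Hub
# into the local cache, which fails on an offline device.
encoder = HuggingFaceEncoder()
```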

Thanks!

Never mind. I figured out that I can just pass the local path as the name, e.g. HuggingFaceEncoder(name=local_path). Sorry for the inconvenience.
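
For anyone who finds this later, here is a minimal sketch of what worked for me (the directory path below is a placeholder for wherever your local copy of the model lives):

```python
from semantic_router.encoders import HuggingFaceEncoder

# Placeholder path: a directory containing a pre-downloaded copy of
# all-MiniLM-L6-v2 (config.json, model weights, tokenizer files).
local_path = "/opt/models/all-MiniLM-L6-v2"

# Passing the directory path as `name` makes the underlying loader
# read the model from disk instead of downloading it from the Hub.
encoder = HuggingFaceEncoder(name=local_path)

# Encoders are callable on a list of documents.
embeddings = encoder(["route this utterance"])
```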