LLM plugin for embedding models using sentence-transformers
Further reading:
- LLM now provides tools for working with embeddings
- Embedding paragraphs from my blog with E5-large-v2
Install this plugin in the same environment as LLM.
llm install llm-sentence-transformers
After installing the plugin you can register additional models to use with it. The all-MiniLM-L6-v2 model is registered by default, and will be downloaded the first time you use it.
You can try that model out like this:
llm embed -m mini-l6 -c 'hello'
This will return a JSON array of floating point numbers.
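That JSON is easy to load from other tools. A minimal Python sketch, using a shortened stand-in array (a real all-MiniLM-L6-v2 embedding has 384 elements):

```python
import json

# Stand-in for the CLI's JSON output; a real all-MiniLM-L6-v2
# embedding is a 384-element array of floats
raw = "[0.0421, -0.0183, 0.0976, -0.0034]"

vector = json.loads(raw)
print(len(vector), type(vector[0]).__name__)  # → 4 float
```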
You can add more models using the llm sentence-transformers register command. The sentence-transformers documentation includes a list of available models. Two good models to start experimenting with are all-MiniLM-L12-v2 (a 120MB download) and all-mpnet-base-v2 (420MB).
To register the all-mpnet-base-v2 model, run:
llm sentence-transformers register \
all-mpnet-base-v2 \
--alias mpnet
The --alias option is optional, but can be used to configure one or more shorter aliases for the model. You can run llm aliases to confirm which aliases you have configured, and llm aliases set to configure further aliases.
Once you have installed an embedding model you can use it like this:
llm embed -m sentence-transformers/all-mpnet-base-v2 \
-c "Hello world"
Or use its alias:
llm embed -m mpnet -c "Hello world"
Embeddings are more useful if you store them in a database - see the LLM documentation for instructions on doing that.
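Once stored, embeddings are typically compared with cosine similarity. A pure-Python sketch (the vectors here are short illustrative stand-ins, not real model output):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Illustrative stand-in vectors, not real embeddings
v1 = [0.1, 0.3, 0.5]
v2 = [0.1, 0.29, 0.51]
print(cosine_similarity(v1, v2))
```

Higher values mean the two texts are closer in the model's embedding space; identical vectors score 1.0.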
Be sure to review the documentation for the model you are using. Many models will silently truncate content beyond a certain number of tokens; all-mpnet-base-v2, for example, says that "input text longer than 384 word pieces is truncated".
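If your input may exceed a model's limit, one option is to chunk it before embedding each piece separately. A rough sketch that uses a naive whitespace word count as a stand-in for the model's word-piece tokenizer (word pieces can outnumber words, so max_words is kept below the 384 limit to leave headroom):

```python
def chunk_words(text, max_words=300):
    # Split on whitespace and group into chunks of at most max_words
    # words. This only approximates the model's word-piece count, so
    # max_words stays below the 384 word-piece limit for headroom.
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

chunks = chunk_words("word " * 700)
print(len(chunks))  # → 3
```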
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-sentence-transformers
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest