Run evals against prompts using LLM
Very early alpha: everything is likely to change.
Install this plugin in the same environment as LLM.
llm install llm-evals-plugin
For usage instructions, see this issue comment.
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd llm-evals-plugin
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
llm install -e '.[test]'
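The `-e` flag installs the checkout in editable mode, so code changes take effect without reinstalling, and the `[test]` suffix also installs the project's test dependencies. As a hypothetical sketch only (the repository's actual `pyproject.toml` may declare different dependencies), such a `test` extra is typically defined like this:

```toml
# Hypothetical sketch: how a "test" extra is commonly declared in
# pyproject.toml. The plugin's real file may differ.
[project.optional-dependencies]
test = ["pytest"]
```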
To run the tests:
pytest