truera/trulens

[FEAT] Can I use a full local LLM instead of openai?

Lauorie opened this issue · 1 comment

Feature Description
Describe the feature you are requesting. Try to reference existing implementations/papers/examples when possible.
I'd like to evaluate my RAG system using trulens with a local LLM and embedding model, but I cannot find any instructions to follow.
Reason
Why is this needed? Is there a workaround or another library/product you are using today to fulfill this gap?
My RAG system is deployed in an environment without network access, so I can't use OpenAI.
Importance of Feature
What value does this feature unlock for you?
It would let me evaluate my RAG system entirely in a local environment.

Hi @Lauorie - you can do this today. See the Ollama quickstart.
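For reference, here is a minimal sketch of that pattern: pointing TruLens's LiteLLM feedback provider at a locally running Ollama server so no OpenAI call is made. The model name `ollama/llama2`, the default Ollama port, and the use of the built-in `relevance` feedback function are assumptions for illustration; check the quickstart for the exact setup matching your trulens_eval version.

```python
# Sketch: local LLM as the TruLens feedback provider instead of OpenAI.
# Assumes `pip install trulens_eval litellm` and an Ollama server
# running locally (default endpoint http://localhost:11434).
from trulens_eval import Feedback
from trulens_eval.feedback.provider.litellm import LiteLLM

# Point the LiteLLM provider at the local Ollama endpoint.
# "ollama/llama2" is an example model name; use whichever model
# you have pulled with `ollama pull`.
provider = LiteLLM(
    model_engine="ollama/llama2",
    api_base="http://localhost:11434",
)

# Feedback functions now run entirely against the local model,
# e.g. answer relevance for a RAG pipeline.
f_relevance = Feedback(provider.relevance).on_input_output()
```

From here you would attach `f_relevance` (and any other feedback functions) to your app recorder as usual; the evaluation traffic stays on your machine.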

Does this work for you?