henomis/lingoose

Possible to use with a local LLM?

chrisbward opened this issue · 4 comments

As titled, I would prefer to use a local LLM instead of OpenAI's GPT. I arrived here via this tutorial/introduction to RAG:

https://simonevellei.com/blog/posts/leveraging-go-and-redis-for-efficient-retrieval-augmented-generation/

I suggest using LocalAI with a custom LLM, then connecting LinGoose to LocalAI through a custom OpenAI client (WithClient()) pointed at the local endpoint.
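
A minimal sketch of that setup, assuming a LocalAI server on its default port 8080 exposing the OpenAI-compatible /v1 API. WithClient comes from the suggestion above; the NewCompletion constructor and Completion method names are assumptions, so verify them against the llm/openai package in your LinGoose version:

```go
package main

import (
	"context"
	"fmt"

	"github.com/henomis/lingoose/llm/openai"
	goopenai "github.com/sashabaranov/go-openai"
)

func main() {
	// Build a go-openai client that targets LocalAI instead of
	// api.openai.com. LocalAI speaks the OpenAI API, so only the base
	// URL changes; LocalAI ignores the key, so any placeholder works.
	config := goopenai.DefaultConfig("sk-unused")
	config.BaseURL = "http://localhost:8080/v1" // assumed LocalAI endpoint
	client := goopenai.NewClientWithConfig(config)

	// Hand the custom client to LinGoose via WithClient, as suggested
	// above. NewCompletion and Completion are assumed names; check the
	// lingoose release you are using.
	llm := openai.NewCompletion().WithClient(client)

	response, err := llm.Completion(context.Background(), "Say hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(response)
}
```

Note that LocalAI routes requests by model name, so the model LinGoose sends must match one configured in your LocalAI instance.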

@henomis Can you comment on why LocalAI and not Ollama?

nvm, I see. It means you don't have to do any work.

A shame, because Ollama presents much nicer development ergonomics, specifically its similarity to Docker (see the sketch after this list):

  • Dockerfile 👉🏻 Modelfile
  • docker build ... 👉🏻 ollama create ...
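
To illustrate the analogy: FROM, PARAMETER, and SYSTEM are documented Modelfile instructions, while the base model and values here are placeholders.

```
FROM llama2
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant."
```

You would then run something like `ollama create my-assistant -f Modelfile`, much as you would `docker build -t my-image .`.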

@airtonix I will check this project and the possibility of integrating it into LinGoose. Thanks for the suggestion.

Ollama will be supported in the next LinGoose version.