alishobeiri/thread

Allow using a local LLM


Would it be possible to add support for a locally running solution, like Ollama?

Yes! You will be able to use it with a locally running Ollama model very soon - it's in the works and should be pushed shortly. We'll update here when it's ready!

Ollama support is officially in beta with this commit, and is available in v0.1.9! Here's a demo video:

ollama-support.mp4

The new model selector lets you choose which model to use - either OpenAI or a local Ollama model. Point Thread at your Ollama URL, enter the name of the running model, and it can run AI fully locally!
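
For anyone curious what "pointing at the Ollama URL" boils down to under the hood, here's a rough sketch of the two requests involved, assuming Ollama's default URL `http://localhost:11434` and a placeholder model name `llama3` - in Thread you just enter these through the model selector, so this is purely for illustration:

```typescript
// Minimal sketch of talking to a locally running Ollama server.
// The URL and model name are placeholders - use whatever you have running.
const OLLAMA_URL = "http://localhost:11434"; // Ollama's default port
const MODEL = "llama3";                      // any model you've pulled with `ollama pull`

// List the models the local server has pulled, to validate the model name.
async function listLocalModels(): Promise<string[]> {
  const res = await fetch(`${OLLAMA_URL}/api/tags`);
  const body = await res.json();
  return body.models.map((m: { name: string }) => m.name);
}

// Send a chat turn to the local model. With `stream: false`, Ollama returns
// a single JSON object containing the assistant message.
async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: MODEL,
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  const body = await res.json();
  return body.message.content;
}

listLocalModels().then((models) => console.log("available:", models));
chat("Write a pandas one-liner to read a CSV").then(console.log);
```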

Sidenote: Local models work well for chat, but I ran into some trouble getting them to respect function calls for the code generation / edit tasks. We'll be hard at work getting those up and running. 🤝
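
To make the function-call caveat concrete: the code generation / edit flows send the model an OpenAI-style tool schema and expect a structured call back, roughly like the sketch below (the shapes and names here are illustrative, not Thread's actual internals). Smaller local models often answer with free-form prose instead of the structured call, which is what breaks the edit flow.

```typescript
// Illustrative shapes only - not Thread's actual internals.
// An OpenAI-style tool definition sent along with the chat request:
const editCellTool = {
  type: "function" as const,
  function: {
    name: "edit_cell", // hypothetical tool name
    description: "Replace the contents of a notebook cell",
    parameters: {
      type: "object",
      properties: {
        cellId: { type: "string" },
        newSource: { type: "string" },
      },
      required: ["cellId", "newSource"],
    },
  },
};

// What the client hopes to get back when the model respects the tool:
const structuredReply = {
  tool_calls: [
    {
      function: {
        name: "edit_cell",
        arguments: '{"cellId": "3", "newSource": "df = pd.read_csv(\'data.csv\')"}',
      },
    },
  ],
};

// What smaller local models often return instead - plain prose that the
// client can't reliably map back onto an edit action:
const proseReply = {
  content: "Sure! You can read the CSV with df = pd.read_csv('data.csv').",
};
```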

Closing this issue out since local LLMs are working - please feel free to give it a try and raise any additional issues!