Allow Pulling New Models without Going Through Ollama
BruceMacD opened this issue · 0 comments
BruceMacD commented
This is spun out from #1
Right now you can run custom LLMs using Ollama, but chatd expects the model to have already been downloaded before Ollama is connected to chatd. chatd should be able to download new models directly, without going through the Ollama CLI.
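One way chatd could do this is via Ollama's REST API rather than the CLI: Ollama exposes a streaming `POST /api/pull` endpoint on its default port. The sketch below is a hypothetical illustration, not chatd's actual code; the host constant, `formatPullStatus` helper, and callback shape are all assumptions.

```javascript
// Assumed default Ollama host; a real integration would make this configurable.
const OLLAMA_HOST = "http://localhost:11434";

// Turn one line of Ollama's newline-delimited JSON pull stream into a
// short progress string suitable for a UI status bar.
function formatPullStatus(line) {
  const msg = JSON.parse(line);
  if (msg.total && msg.completed !== undefined) {
    const pct = Math.round((msg.completed / msg.total) * 100);
    return `${msg.status}: ${pct}%`;
  }
  return msg.status;
}

// Pull a model through the HTTP API, reporting progress via a callback.
async function pullModel(name, onProgress) {
  const res = await fetch(`${OLLAMA_HOST}/api/pull`, {
    method: "POST",
    body: JSON.stringify({ name }),
  });
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffered = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split("\n");
    buffered = lines.pop(); // keep any partial trailing line for the next chunk
    for (const line of lines) {
      if (line.trim()) onProgress(formatPullStatus(line));
    }
  }
}
```

This would let chatd show download progress in its own UI instead of requiring a terminal.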
Workaround: for the time being, run `ollama pull <model name>` in a terminal to download the model before switching to it in chatd.