BruceMacD/chatd

💡 Feature request > Be able to choose any locally installed model 🙏

adriens opened this issue · 7 comments

Currently, it seems like we cannot choose any locally installed model:

  • Is it possible?
  • Would it be possible?

Hi @adriens, thanks for the feature request. It is currently possible, but a bit hidden.

The model can be specified if you download and run Ollama yourself. If Ollama is running when chatd starts, a settings button is displayed in the top-right corner of the screen that allows specifying a model by name.
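
For reference, any model that shows up under ollama list (run from a terminal, assuming the ollama CLI is installed alongside the server) should be usable there by name:

ollama list

The names in the first column of that output are what you would type into the settings field.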

I'll document this somewhere.

(Screenshots from 2023-11-02 showing the settings button in the top-right corner.)

I believe that is not working on Linux? I have ollama running on localhost at http://127.0.0.1:11434/, however every time I run chatd it automatically starts downloading an LLM.

My ollama mistral model is working. When I run chatd from the terminal, it recognises that the ollama server is running, but it does not show the settings icon. Instead, it automatically starts downloading an LLM. It does not give me the option to use the one I created with ollama.

I created the ollama model the following way. I downloaded the dolphin-mistral 7B model from Hugging Face (using LM Studio).

I created a file named Modelfile with the following line:

FROM /home/david/.cache/lm-studio/models/TheBloke/dolphin-2.1-mistral-7B-GGUF/dolphin-2.1-mistral-7b.Q6_K.gguf

and then executed:

ollama create localmistral -f Modelfile

then executed ollama serve

And after that, chatd. And I get the behaviour mentioned above. What am I missing?
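
One sanity check, assuming the ollama CLI is on the PATH, is to confirm that the custom model is registered and responds outside of chatd:

ollama list
ollama run localmistral "Hello"

If localmistral is listed and answers, the model itself is registered correctly on the ollama side.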

@basillicus Those steps are correct. I've seen some reports of instances on Linux where Ollama wasn't accessible on 127.0.0.1. I'm gonna take a look at this and get back to you.
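
In the meantime, a quick way to confirm reachability on Linux (assuming the default port and no OLLAMA_HOST override) is:

curl http://127.0.0.1:11434/

A running server replies with "Ollama is running"; if the service is bound to a different address via the OLLAMA_HOST environment variable, chatd's default address check would presumably miss it.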

Hey @BruceMacD, thanks a lot for the tips 🙏

Hi @BruceMacD , I believe the communication is actually correct.

If I run journalctl -u ollama -f to watch ollama's log live and then execute chatd, I can see that it is ollama that is downloading the model.


Nov 04 12:48:32 cirujano ollama[3473]: [GIN] 2023/11/04 - 12:48:32 | 200 |      54.606µs |       127.0.0.1 | GET      "/"
Nov 04 12:48:34 cirujano ollama[3473]: 2023/11/04 12:48:34 download.go:127: downloading 6ae280299950 in 64 64 MB part(s)
Nov 04 12:48:54 cirujano ollama[3473]: [GIN] 2023/11/04 - 12:48:54 | 200 | 22.413264133s |       127.0.0.1 | POST     "/api/pull"
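
A related check, assuming the default address, is to ask the server directly which models it has available locally:

curl http://127.0.0.1:11434/api/tags

That endpoint returns the same list as ollama list, as JSON, so it shows exactly what the serving instance can see.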

I am still new to this LLM world and may be missing some important step somewhere.

Thanks for the update basillicus, that explains it. I didn't build the custom model loading with models that aren't already available locally in mind. I'll spin that out into a different issue.

For the time being, it will work better if you run ollama pull <model name> in a terminal to download the model before switching to it in chatd.
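
For example (mistral here is just a model name from the Ollama library, used as an illustration):

ollama pull mistral

Once the pull finishes, entering mistral in the chatd settings should use the already-downloaded copy instead of triggering a new download.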

Moved a better "new model download" experience to a new issue. Resolving this one for now as it is generally possible to run custom LLMs.