alexrozanski/LlamaChat

Failed to load model for eachadea/ggml-vicuna-7b-1.1

fakechris opened this issue · 2 comments

After I downloaded the ggml-vicuna-7b-1.1-q4_0.bin model from eachadea/ggml-vicuna-7b-1.1
(https://huggingface.co/eachadea/ggml-vicuna-7b-1.1/tree/main), I was able to add a Chat Source successfully.
However, during conversation, a "Failed to load model" error occurred.
I also tried llama.cpp: I could only load the model after updating to the latest llama.cpp, and a build from 5 days ago also failed to load it. I'm not sure whether the ggml model format used by llama.cpp has changed in some way.

Hey @fakechris, I know there have been some changes to llama.cpp in the last week; I'm working on updating the bindings so that these are supported. I haven't tested Vicuna support specifically either — that's coming.

Vicuna works with the same sort of parameters as plain LLaMA, but AFAIK it requires the "User:" prompt format to be used.
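The "User:" convention above can be sketched as a simple prompt builder. This is a rough illustration, not LlamaChat's actual implementation; the exact role labels and system preamble vary between Vicuna releases, so treat the strings below as assumptions:

```python
def build_vicuna_prompt(
    turns,
    system="A chat between a curious user and an artificial intelligence assistant.",
):
    """Format (user, assistant) turn pairs into a Vicuna-style prompt.

    `turns` is a list of (user_msg, assistant_msg) tuples; pass None as the
    assistant message for the final, not-yet-answered turn. The trailing
    "Assistant:" line cues the model to generate the next reply.
    """
    lines = [system]
    for user_msg, assistant_msg in turns:
        lines.append(f"User: {user_msg}")
        if assistant_msg is not None:
            lines.append(f"Assistant: {assistant_msg}")
    lines.append("Assistant:")
    return "\n".join(lines)
```

When driving llama.cpp interactively, the same role label is typically also passed as the reverse prompt (e.g. `-r "User:"`) so generation stops when the model starts writing the next user turn.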