vicuna-tools/vicuna-installation-guide
The "vicuna-installation-guide" provides step-by-step instructions for installing and configuring Vicuna 13 and 7B
Issues
unable to load model (#19, opened by sweihub, 0 comments)
Example of using embedding (#18, opened by cclinus, 2 comments)
General Question (#17, opened by Proper231, 9 comments)
Unable to load model error still occurring (#14, opened by uptogodown, 1 comment)
how to generate a llama.cpp server with fastchat api (#16, opened by xx-zhang, 2 comments)
run issue (#13, opened by Tsunami014, 3 comments)
make -j error (#11, opened by ugmqu, 3 comments)
A little confused. (#9, opened by TeaCult, 1 comment)
gpu inference (#10, opened by ziliangpeng, 2 comments)
Different and sometimes wrong answers with ggml-vic13b-q5_1.bin + ggml-vic13b-uncensored-q5_1.bin (#8, opened by breisig, 1 comment)
why github.com/fredi-python/llama.cpp (#7, opened by ziliangpeng, 1 comment)
Run it on GPU (#6, opened by kilkujadek, 9 comments)
Installation failure: make -j gives error (#2, opened by sujitpal, 4 comments)
Hugging face wget command fails with Username/Password Authentication Failed. (#3, opened by AverageGuy, 0 comments)
Doesnt open after pressing any key (#1, opened by Snoft)