Failed to convert Llama 2 7B model from .gguf to .bin format
adi-lb-phoenix opened this issue · 2 comments
adi-lb-phoenix commented
I tried to convert a Llama 2 model from .gguf to .bin:
~/llm_inferences/llama.cpp/models/meta$ ls
llama-2-7b.Q4_K_M.gguf
python3 export.py llama2_7b.bin --meta-llama /home/####/llm_inferences/llama.cpp/models
Traceback (most recent call last):
File "/home/aadithya.bhat/llm_inferences/llama2.c/export.py", line 559, in <module>
model = load_meta_model(args.meta_llama)
File "/home/aadithya.bhat/llm_inferences/llama2.c/export.py", line 373, in load_meta_model
with open(params_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/aadithya.bhat/llm_inferences/llama.cpp/models/params.json'
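From the traceback, export.py's load_meta_model looks for a params.json inside the directory passed via --meta-llama. A minimal sketch of the failing lookup (the names below are assumptions for illustration, not the exact export.py source):

    import json
    import os

    def load_meta_model(model_path):
        # export.py resolves params.json relative to the --meta-llama directory;
        # a GGUF download contains only the .gguf file, so this open() fails.
        params_path = os.path.join(model_path, 'params.json')
        with open(params_path) as f:  # raises FileNotFoundError: [Errno 2]
            params = json.load(f)
        return params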
I downloaded this model from https://huggingface.co/TheBloke/Llama-2-7B-GGUF (the file whose name ends in Q4_K_M.gguf).
chsasank commented
llama2.c's export.py converts the original Meta Llama 2 checkpoints, not quantised GGUF files. load_meta_model expects Meta's checkpoint layout, including the params.json that your traceback shows it failing to find; a GGUF download does not include those files.
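For reference, a conversion from the original Meta checkpoint would look roughly like this. The paths below are illustrative; the official llama-2-7b download ships checklist.chk, consolidated.00.pth and params.json together in one directory, which is what --meta-llama should point at:

    ~/llm_inferences/llama-2-7b$ ls
    checklist.chk  consolidated.00.pth  params.json
    ~/llm_inferences/llama-2-7b$ python3 export.py llama2_7b.bin --meta-llama ~/llm_inferences/llama-2-7b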
adi-lb-phoenix commented
Yes, noted. I did not follow the instructions carefully.