afaqueumer/DocQA

Could not load Llama model from path

Opened this issue · 3 comments

xmagcx commented

52, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\mauri\Downloads\DocQA-main\DocQA-main\app.py", line 42, in <module>
    llm = LlamaCpp(model_path="./models/llama-7b.ggmlv3.q4_0.bin")
  File "C:\Users\mauri\Downloads\DocQA-main\DocQA-main\venv\lib\site-packages\langchain\load\serializable.py", line 74, in __init__
    super().__init__(**kwargs)
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
  Could not load Llama model from path: ./models/llama-7b.ggmlv3.q4_0.bin. Received error Model path does not exist: ./models/llama-7b.ggmlv3.q4_0.bin (type=value_error)

What version of python are you using?

I guess you need to edit the path or place the model in the same directory. This is a path error: the model path was hard-coded.
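A minimal sketch of one way to avoid this error: resolve the model path up front and fail with a clear message if the file is missing, rather than letting pydantic raise a ValidationError inside LlamaCpp. The helper name resolve_model_path is hypothetical (not from the repo); the LlamaCpp call is shown commented out as in the traceback.

```python
from pathlib import Path

def resolve_model_path(path_str):
    """Return an absolute path to the model file, or raise FileNotFoundError.

    Hypothetical helper: checks the file exists before handing the path
    to LlamaCpp, so a missing model fails early with a readable message.
    """
    path = Path(path_str).expanduser().resolve()
    if not path.is_file():
        raise FileNotFoundError(
            f"Model file not found: {path}. Download the model and place it "
            "at this location, or pass the correct path."
        )
    return str(path)

# Usage (assumes langchain is installed, as in the traceback):
# from langchain.llms import LlamaCpp
# llm = LlamaCpp(model_path=resolve_model_path("./models/llama-7b.ggmlv3.q4_0.bin"))
```

Note that relative paths like ./models/… resolve against the current working directory, not app.py's location, so running the app from another directory also triggers this error.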

Hey @xmagcx , could you solve the problem?

Replace pipenv with python -m when running the app.
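A sketch of what that invocation might look like, assuming the app is launched with Streamlit (the traceback's _run_script frame comes from Streamlit's script runner) and that you run it from the project root so the relative ./models path resolves:

```shell
# Use the active interpreter's module runner instead of pipenv,
# from the directory that contains app.py and models/:
python -m streamlit run app.py
```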