wafflecomposite/langchain-ask-pdf-local
An AI app that lets you upload a PDF and ask questions about it. It uses StableVicuna 13B and runs locally.
Python
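Below is a minimal sketch of how a local ask-the-PDF pipeline like this typically fits together in LangChain: extract the PDF text, chunk and embed it into a vector store, then answer questions with the locally loaded model. It assumes PyPDF2, FAISS, and sentence-transformers embeddings; the repository's actual wiring may differ, and the file paths are placeholders.

```python
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import LlamaCpp

# Extract raw text from the uploaded PDF (placeholder path)
reader = PdfReader("document.pdf")
text = "".join(page.extract_text() or "" for page in reader.pages)

# Chunk the text and index it for similarity search
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_text(text)
store = FAISS.from_texts(chunks, HuggingFaceEmbeddings())

# Load StableVicuna locally and answer a question over the most relevant chunks
llm = LlamaCpp(model_path="./models/stable-vicuna-13B.ggml.q4_0.bin", n_ctx=2048)
chain = load_qa_chain(llm, chain_type="stuff")
question = "What is this document about?"
docs = store.similarity_search(question)
print(chain.run(input_documents=docs, question=question))
```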
Issues
I have a GPU and expect the model to run faster, but the code is CPU-only; how do I change it?
#4 opened by alexhmyang · 1 comment
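A minimal sketch of offloading layers to the GPU through LangChain's LlamaCpp wrapper. It assumes llama-cpp-python was reinstalled with CUDA support (for GGML-era releases, `CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install --force-reinstall --no-cache-dir llama-cpp-python`); the parameter values are illustrative and should be tuned to your hardware.

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/stable-vicuna-13B.ggml.q4_0.bin",
    n_gpu_layers=40,  # layers to offload to the GPU; tune to fit your VRAM
    n_batch=512,      # tokens processed per batch during prompt evaluation
    n_ctx=2048,       # context window size
)
```

With `n_gpu_layers` left unset, inference runs entirely on the CPU, which matches the behavior reported here.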
Using a different PDF raises an error
#5 opened by alexhmyang · 1 comment
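A common cause of this in "ask your PDF" pipelines is a PDF whose pages contain no extractable text (e.g. scanned images), so a guard like the following fails fast with a clear message. The reader and path are assumptions; the repository may use a different extraction step.

```python
from PyPDF2 import PdfReader

reader = PdfReader("input.pdf")  # placeholder path
text = "".join(page.extract_text() or "" for page in reader.pages)

# Scanned or image-only PDFs yield empty text and break downstream chunking
if not text.strip():
    raise ValueError("No extractable text found; the PDF may be scanned images.")
```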
Cannot build wheels
#3 opened by MoRadwan21 · 1 comment
validation error for LlamaCpp __root__ Could not load Llama model from path: ./models/stable-vicuna-13B.ggml.q4_0.bin
#2 opened by jimmathew999
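This pydantic validation error usually just means llama.cpp could not open the file at that path. Below is a minimal pre-flight check, assuming the default ./models/ location; note also that newer llama-cpp-python releases expect GGUF models rather than the GGML format named here, which can surface as the same error.

```python
from pathlib import Path

model_path = Path("./models/stable-vicuna-13B.ggml.q4_0.bin")

# Fail early with a clearer message than LlamaCpp's validation error
if not model_path.is_file():
    raise FileNotFoundError(
        f"Model not found at {model_path}; download it into ./models/ first."
    )
```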