PromtEngineer/localGPT
Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Python · Apache-2.0
Issues
Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
#796 opened by dportabella - 1
How to authenticate to huggingface.co, from the run_localGPT.py script, using Docker?
#797 opened by dportabella - 1
Problem when ingesting (CPU only)
#783 opened by alexmc6 - 8
run_localGPT_API
#788 opened by Suiji12 - 10
No module named 'triton'
#761 opened by atyara - 8
INSTRUCTOR._load_sbert_model() got an unexpected keyword argument 'token'
#722 opened by phoenixvictory - 0
Support llama-3
#789 opened by boixu - 1
Not using the GPU
#768 opened by CODE-SAURABH - 1
Docker not using GPU
#746 opened by r2d2levrai - 0
error in /opt/nvidia/nvidia_entrypoint.sh
#787 opened by perler - 1
Mistral not supported
#778 opened by testercell - 0
Extra Options with run_localGPT_API.py?
#786 opened by carloposo - 1
llama-cpp-python not found
#782 opened by NitkarshChourasia - 3
Unable to load llama model from path
#726 opened by shibbycribby - 2
How to make localGPT translate everything into English before storing and processing inputs
#725 opened by PayteR - 1
Download the source document
#736 opened by RishithEllathMeethalVeridos - 8
"TypeError: 'HuggingFaceInstructEmbeddings' object is not callable" after entering a query
#731 opened by Serializ3r - 1
Question: How to run UI from Docker?
#742 opened by Apotrox - 4
Hugging Face down: unable to run models that have already been downloaded
#744 opened by ABottleOJack - 4
[Question] Can you help me, please? 100k PDFs!
#757 opened by MatteoRiva95 - 2
AutoAWQ
#777 opened by testercell - 0
use_history in API
#770 opened by prakashebi - 1
Could the project support the latest model Gemma, which has much higher performance than Llama 2?
#759 opened by Zephyruswind - 4
UI is a blank web page
#767 opened by dasqueel - 0
Getting an error when I run python ingest.py
#766 opened by vascubrian - 5
Can localGPT support Chinese?
#756 opened by Yaqing2023 - 0
Wrong answer
#764 opened by bansal247 - 1
Error when starting python ingest.py
#762 opened by Shelushun - 1
Your GPU is probably not used at all, which would explain the slow speed in answering.
#750 opened by thomasmeneghelli - 7
AttributeError: 'LlamaRotaryEmbedding' object has no attribute 'cos_cached'. Did you mean: 'sin_cached'?
#755 opened by TheMrSeven - 6
KeyError: 'Cache only has 0 layers, attempted to access layer with index 0 - TheBloke/WizardLM-30B-Uncensored-GPTQ
#743 opened by bp020108 - 1
Docker Build no module named 'utils'
#739 opened by Apotrox - 0
ExLlama kernel does not support query
#740 opened by bp020108 - 0
GPU layers / batch size / model selection
#738 opened by bp020108 - 6
Ingestion Error / Batch processing
#724 opened by lavericklavericklaverick - 0