kennethleungty/Llama-2-Open-Source-LLM-CPU-Inference
Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A
Python · MIT License
Issues
How, where, and what do I need to configure to migrate this to a GPU-based system?
#30 opened by VIGHNESH1521 · 1 comment
Where do we need to make modifications for it to run on GPU-based systems?
#29 opened by VIGHNESH1521 · 4 comments
Can we use a GPU for increased speed and a bigger, better Llama 2 model?
#7 opened by stevedipaola · 0 comments
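The GPU questions above mostly come down to the ctransformers backend this repo builds on: a CUDA-enabled build of ctransformers (`pip install ctransformers[cuda]`) can offload transformer layers via its `gpu_layers` option. A minimal sketch of the relevant config follows; `gpu_layers` is a real ctransformers option, but the other keys and values here are assumptions, not the repo's actual settings.

```python
# Sketch, not the repo's code: a ctransformers-style config dict.
# "gpu_layers" needs a CUDA build of ctransformers; 0 keeps inference
# fully on CPU, larger values offload that many layers to the GPU.
GPU_CONFIG = {
    "max_new_tokens": 256,   # assumed generation cap
    "temperature": 0.01,     # assumed near-deterministic sampling
    "gpu_layers": 32,        # layers offloaded to GPU (assumption: fits VRAM)
}

def cpu_fallback(config: dict) -> dict:
    """Return a copy of the config with GPU offload disabled."""
    return {**config, "gpu_layers": 0}
```

The assumed wiring is to pass this as the `config` argument of LangChain's `CTransformers` wrapper; the exact supported keys should be checked against the ctransformers documentation.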
Any idea how to let it remember all previous prompts and answers, like ChatGPT, so it can hold a continuous chat?
#28 opened by jhhspace · 0 comments
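On the continuous-chat question above: LangChain ships memory classes such as `ConversationBufferMemory` for exactly this, but the underlying idea can be sketched framework-free — keep recent (question, answer) turns and prepend them to each new prompt. All names below are hypothetical; the turn cap matters because the small CPU models have a limited context window.

```python
# Minimal sketch (hypothetical helper, not from the repo): a rolling
# chat history that is prepended to each new question so the model
# sees the conversation so far.
class ChatHistory:
    def __init__(self, max_turns: int = 5):
        self.max_turns = max_turns   # cap history to fit the context window
        self.turns = []              # list of (question, answer) pairs

    def add(self, question: str, answer: str) -> None:
        self.turns.append((question, answer))
        self.turns = self.turns[-self.max_turns:]  # drop the oldest turns

    def build_prompt(self, question: str) -> str:
        history = "\n".join(
            f"User: {q}\nAssistant: {a}" for q, a in self.turns
        )
        tail = f"User: {question}\nAssistant:"
        return f"{history}\n{tail}" if history else tail
```

The answer returned by the QA chain would then be fed back in with `add()` before the next question.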
Can I run the same code on other data formats such as .xlsx, .docx, .txt, and .pptx, or do I need to modify part of the code?
#26 opened by VIGHNESH1521 · 1 comment
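On the data-formats question above: the ingestion step would need a different LangChain document loader per format, since a PDF loader alone cannot read .docx or .pptx. The class names below are real LangChain loaders, but this extension-to-loader map is a hypothetical sketch, and each loader pulls in its own extra dependency (e.g. docx2txt, unstructured, openpyxl).

```python
import os

# Hypothetical mapping from file extension to the LangChain document
# loader class name that can read it; only the PDF path is part of the
# repo itself, the rest would have to be wired in.
LOADERS = {
    ".pdf":  "PyPDFLoader",
    ".txt":  "TextLoader",
    ".docx": "Docx2txtLoader",
    ".xlsx": "UnstructuredExcelLoader",
    ".pptx": "UnstructuredPowerPointLoader",
}

def loader_for(path: str) -> str:
    """Pick a loader class name by file extension (case-insensitive)."""
    ext = os.path.splitext(path)[1].lower()
    if ext not in LOADERS:
        raise ValueError(f"unsupported format: {ext}")
    return LOADERS[ext]
```

After loading, the documents go through the same splitting and FAISS-indexing step regardless of their original format.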
I want to add my personal files.
#25 opened by VIGHNESH1521 · 0 comments
Git LFS error
#23 opened by arun-raze19 · 0 comments
Please post the instructions on GitHub; the Medium blog instructions are paywalled
#16 opened by gidzr · 0 comments
[Feature Request] Support InternLM Deploy
#21 opened by vansinhu · 0 comments
Config customization
#20 opened by AleksandrTulenkov · 2 comments
How to change the data files?
#19 opened by Janeyanhong · 0 comments
A Question.
#18 opened by manbehindthemadness · 1 comment
Model config error
#15 opened by malv-c · 2 comments
Aborted (core dumped) when executing dbqa()
#14 opened by wennycooper · 0 comments
Support for 70b by updating ctransformers
#12 opened by thirtysix · 0 comments
Configure system prompt
#11 opened by sumitsoman · 1 comment
How to improve response time?
#9 opened by pawanGithub10 · 1 comment
License
#2 opened by lakshmanok · 1 comment
Requesting a requirements.txt
#1 opened by isayahc