Issues
- run python chat.py ERROR (#42, opened by moon-fall, 0 comments)
- Response is being stopped before finishing (#40, opened by vashat, 6 comments)
- Custom Finetuned Models? (#32, opened by mallorbc, 3 comments)
- Towards a C++ library (#36, opened by A2va, 1 comment)
- Register on PyPi and add pip install support. (#7, opened by Ayushk4, 2 comments)
- Keep model in RAM? (#35, opened by mallorbc, 2 comments)
- Improve interface. (#25, opened by Ayushk4, 5 comments)
- Upload Open-Chat-Kit models at int4 on huggingface and add the URL mapping to interface.py (#27, opened by Ayushk4, 5 comments)
- Add support for GLM/chatGLM models (#20, opened by MarkSchmidty, 0 comments)
- Add GPT-JT model (#11, opened by Ayushk4, 0 comments)
- Add benchmark tasks like BIG Bench (#21, opened by kamalojasv181, 0 comments)
- Guide for install on windows. (#19, opened by HCBlackFox, 0 comments)
- How to load model from disk? (#18, opened by HCBlackFox, 0 comments)
- How to make in windows? (#17, opened by HCBlackFox, 0 comments)
- Simplifying the quantization pipeline (#9, opened by kamalojasv181, 1 comment)
- Upload GPTQ Quantized models in 4-bit precision format for different bin-sizes to Huggingface (#2, opened by Ayushk4, 0 comments)
- Add support for more architectures. (#1, opened by Ayushk4)