Issues

- test issue (#62 opened by kerthcet, 11 comments)
- Support fall back across several providers (#60 opened by kerthcet, 1 comment)
- Support counting tokens (#59 opened by kerthcet, 0 comments)
- Support deepspeed-fastgen as another backend (#57 opened by kerthcet, 1 comment)
- Support Baichuan2 Model (#34 opened by kerthcet, 0 comments)
- vLLM not working as expected with ChatGLM2 (#55 opened by kerthcet, 0 comments)
- Support stream output (#52 opened by kerthcet, 0 comments)
- Support loading different adapters (#51 opened by kerthcet, 2 comments)
- Support vLLM as the backend (#32 opened by kerthcet, 0 comments)
- Support for Baichuan LLM (#15 opened by kerthcet, 0 comments)
- Support serving via HTTP/RPC server (#40 opened by kerthcet, 0 comments)
- Support serving fine-tuned layers easily (#39 opened by kerthcet, 0 comments)
- Integration with langchain (#38 opened by kerthcet, 0 comments)
- Support quantization (#37 opened by kerthcet, 0 comments)
- Support text-generation-inference (#35 opened by kerthcet, 0 comments)
- Add codeLlama example (#33 opened by kerthcet, 0 comments)
- Allow system_prompt to be empty (#31 opened by kerthcet, 3 comments)
- Recognize model (#30 opened by Jerry-Kon, 1 comment)
- Upload ChatLLM to PyPI (#5 opened by kerthcet, 0 comments)
- Support for codeLlama (#27 opened by kerthcet, 0 comments)
- Add support for the ChatGPT API (#6 opened by kerthcet, 2 comments)
- Support for StableLM (#11 opened by kerthcet, 0 comments)
- Add support for Falcon LLM (#8 opened by kerthcet, 0 comments)
- Add support for Claude-2 LLM (#7 opened by kerthcet, 0 comments)
- Add test framework via GitHub Actions (#4 opened by kerthcet)