smallcloudai/refact

Add Code LLaMA

klink opened this issue · 4 comments

klink commented
Add Code LLaMA
klink commented

Please provide a description of the model's capabilities (chat / code completion / fine-tuning), GPU requirements, and official links. @JegernOUTT

https://huggingface.co/TheBloke/CodeLlama-7B-fp16
Code completion / fine-tuning.
20 GB+ GPU memory for fine-tuning, 15 GB+ for inference.
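As a rough sanity check on those figures (not from the thread, just back-of-the-envelope arithmetic): an fp16 checkpoint stores 2 bytes per parameter, so the 7B model's weights alone occupy about 13 GiB, and the KV cache plus activations push inference past that, consistent with the 15 GB+ estimate. A minimal sketch:

```python
def fp16_weight_gib(n_params: float) -> float:
    """Approximate size of fp16 model weights in GiB (2 bytes/param)."""
    return n_params * 2 / 1024**3

# CodeLlama-7B: ~7e9 parameters -> ~13 GiB for weights alone,
# before KV cache and activation memory are added.
print(round(fp16_weight_gib(7e9), 1))
```

Fine-tuning needs more on top of this (optimizer state and gradients), which is why the fine-tuning requirement is higher than inference.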

Is there any ETA for Code Llama fine-tuning support?

klink commented

It's already live in our nightly Docker image, so you can test it there. We plan to release it to everyone around next week.