About performance of Llama (Llama 2, Llama 3) models
Opened this issue · 1 comment
huazhenliu commented
Thank you for your wonderful work!
Have you ever experimented with Llama 2 7B as the base model for C-RLFT? How was the performance? Since OpenChat-3.5-0106 is based on Mistral and its performance is very high, I tried Llama 2 7B, but the results were not satisfactory.
Two more questions: can a chat model be used as the base model for C-RLFT? I assume some code changes would be needed, e.g., the chat template.
What about Llama-3-8B-Instruct? Is there an easy way to train it, and is there any performance data?
Thanks in advance.
imoneoi commented
Hi @huazhenliu. We've tried Llama 2 13B; its performance was worse than Mistral 7B's, so we chose Mistral 7B as the base model.
- For your second question, yes: C-RLFT can be applied to any model. You can edit the chat template here: https://github.com/imoneoi/openchat/blob/master/ochat/config/__init__.py
- We're actively working on a new version based on Llama-3-8B
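To illustrate the chat-template point above: adapting C-RLFT to a chat model mostly means reproducing that model's prompt format when serializing conversations. The sketch below is a minimal, hypothetical formatter for the Llama 2 chat format (`[INST]`/`<<SYS>>` markers); it is not the actual template API from `ochat/config/__init__.py`, just an assumption of what such a template function needs to do.

```python
def format_llama2_chat(messages, system_prompt=None):
    """Serialize alternating (role, content) turns into the Llama 2 chat format.

    `messages` is a list of ("user", ...) / ("assistant", ...) pairs in order.
    This is an illustrative sketch, not the openchat repo's real template code.
    """
    parts = []
    for i in range(0, len(messages), 2):
        user_text = messages[i][1]
        # Llama 2 embeds the system prompt inside the first user turn.
        if i == 0 and system_prompt:
            user_text = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_text}"
        turn = f"<s>[INST] {user_text} [/INST]"
        # Append the assistant reply if this turn has one (training data does).
        if i + 1 < len(messages):
            turn += f" {messages[i + 1][1]} </s>"
        parts.append(turn)
    return "".join(parts)
```

A custom template like this would replace the Mistral-style formatting when fine-tuning a Llama-family chat model, so the tokens seen during C-RLFT match what the base chat model expects.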