imoneoi/openchat

About performance of Llama (Llama 2, Llama 3) models

Opened this issue · 1 comment

Thank you for your wonderful work!

Have you ever experimented with Llama 2 7B as the base model for C-RLFT? How was the performance? Since OpenChat-3.5-0106 is based on Mistral and performs really well, I tried Llama 2 7B, but the results were not satisfactory.

Two more questions: can a chat model be used as the base model for C-RLFT? I think some code changes would be needed, e.g., the chat template.
And how about Llama 3 8B Instruct: what is the easiest way to train it, and is there any performance data?
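On the chat-template point: adapting a chat model mainly means reproducing its turn formatting when building training examples. Below is a minimal sketch of an OpenChat-style template in plain Python; the function name is made up here, and the canonical template ships with the tokenizer (e.g. via `tokenizer.apply_chat_template` in Hugging Face transformers), so treat this as illustrative only.

```python
# Illustrative sketch of OpenChat-style turn formatting.
# The authoritative template comes from the model's tokenizer config.

EOT = "<|end_of_turn|>"  # OpenChat's end-of-turn token

def format_openchat(messages):
    """Render a list of {"role": ..., "content": ...} dicts
    into a single OpenChat-style prompt string."""
    parts = []
    for m in messages:
        role = ("GPT4 Correct User" if m["role"] == "user"
                else "GPT4 Correct Assistant")
        parts.append(f"{role}: {m['content']}{EOT}")
    # A trailing assistant prefix cues the model to generate the next reply.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

prompt = format_openchat([{"role": "user", "content": "Hello"}])
```

For a different chat model (e.g. a Llama instruct variant), the role names and special tokens would need to be swapped for that model's own template.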

Thanks in advance.

Hi @huazhenliu, we've tried Llama 2 13B; its performance was worse than Mistral 7B's, so we chose Mistral 7B as the base model.