BAAI-DCAI/Bunny

about the training strategy for Llama-3-8B


Hi, thanks for your work!
You mentioned that "we used better strategies to train Phi-3-Mini-based and Llama-3-8B-based Bunny", so I would like to ask: what kind of strategy did you use when training the Llama-3-8B-based Bunny? And when do you plan to make finetune_lora.sh for the Llama-3-8B-based Bunny public?

All of the training strategies and data for the latest Bunny have been released! Check out more details about Bunny in the Technical Report, Data, and Training Tutorial!
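In the meantime, if you just want a feel for what LoRA fine-tuning of a Llama-3-8B backbone involves, here is a minimal sketch using Hugging Face PEFT. This is not Bunny's actual finetune_lora.sh or its hyperparameters; the model name, rank, and target modules below are illustrative assumptions only.

```python
# Minimal LoRA setup sketch (assumed values, not Bunny's official config)
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3-8B"  # assumed backbone checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters to the attention projections; r and alpha are placeholders.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

For the exact settings used for the Llama-3-8B-based Bunny, please refer to the released Training Tutorial and scripts rather than this sketch.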

Closing the issue for now since there's no further discussion. Feel free to reopen it if there are any other questions.