Pretrain and Finetune template versions
Opened this issue · 1 comment
xin-li-67 commented
Hi,
I noticed that the `--version`
arg in both the pretrain and finetune scripts is set to `v1`, which differs from the original LLaVA / LLaVA-1.5 and other LLaVA-style projects. Do you have any idea why this choice was made?
Best,
Yaxin9Luo commented
Hi, `v1` is more or less the default setting for the finetune stage of an MLLM when you use LLaMA-2, since LMSYS trained a more capable chat version of LLaMA called Vicuna, and `v1` refers to the Vicuna-v1 conversation template. I am not one of the authors, but I hope this still helps.
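For context, here is a minimal sketch of what the `--version` flag typically does in LLaVA-style repos: it selects a conversation template (system prompt, role names, separators) used to format the training data. The template names and fields below are illustrative assumptions, not the project's actual code.

```python
import argparse

# Illustrative conversation-template registry (assumed, not the repo's code).
# "plain" is commonly used for pretraining (caption-only alignment),
# while "v1" is the Vicuna-v1-style template used for instruction finetuning.
CONV_TEMPLATES = {
    "plain": {"system": "", "roles": ("", ""), "sep": "\n"},
    "v1": {
        "system": "A chat between a curious user and an AI assistant.",
        "roles": ("USER", "ASSISTANT"),
        "sep": " ",
    },
}

def get_template(version: str) -> dict:
    """Look up the conversation template selected by --version."""
    if version not in CONV_TEMPLATES:
        raise ValueError(f"unknown template version: {version}")
    return CONV_TEMPLATES[version]

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--version", default="v1")
    args = parser.parse_args()
    # e.g. running with --version v1 selects the Vicuna-style roles
    print(get_template(args.version)["roles"])
```

So passing `--version v1` in both scripts simply means both stages format data with the Vicuna-v1-style chat template.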