mlvlab/Flipped-VQA

Fine-tuning using LLaMA-13B

Huangbukun opened this issue · 3 comments

Hello, if I want to fine-tune using the LLaMA-13B .pth weights, what changes need to be made to the train.sh script? After fine-tuning with the LLaMA-7B parameters, the accuracy is very low.

In our experiments, we changed --adapter_layer from 32 to 40, and you may also want to decrease the learning rate.
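For reference, here is a minimal sketch of how the launch command might look for the 13B checkpoint. Only the --adapter_layer change is confirmed in this thread; the other flag names, paths, and values are assumptions and should be checked against the repo's actual train.sh:

```sh
# Hypothetical adaptation of train.sh for LLaMA-13B.
# Only --adapter_layer 40 is confirmed above; the remaining flags and values
# are assumptions and may differ from the real script.
torchrun --nproc_per_node 8 train.py \
    --model 13B \
    --llama_model_path ./pretrained/llama/ \
    --adapter_layer 40 \
    --output_dir ./checkpoint/13B
# --adapter_layer 40 matches the 40 transformer layers of LLaMA-13B (7B has 32).
# Also set the learning-rate flag lower than in the 7B recipe, as noted above.
```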

thank you!

Hello, after I adjusted --adapter_layer to 40 and changed the learning rate to 9e-3, the accuracy still only reaches about 65%, below your reported result. I don't know what I did wrong.