magic-research/PLLaVA

How to train only the projector?

gaowei724 opened this issue · 2 comments

Hi, I'm working on using your VLM as a classifier for my own task, but I'm getting low precision. In my case, I load your pllava-13b as the pretrained model and fine-tune with the language model and projector trainable and the vision model frozen. I suspect that skipping the projector-only training stage is the reason. So I tried setting model.freeze_lm = True (while keeping model.use_lora = True), but when I read the output log, I found the language model's LoRA parameters were still trainable. So, must I turn off LoRA to freeze all of the language model's parameters?
[screenshot: training log showing the language model's LoRA parameters listed as trainable]

Hi.

Yes, the LoRA weights are trainable by default when use_lora is set to True; freeze_lm does not freeze the LoRA weights. So to freeze all of the language model's parameters, you also need to turn off LoRA (use_lora = False).
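If it helps, here is a minimal sketch of enforcing projector-only training directly at the parameter level, so it holds regardless of how the freeze_lm / use_lora flags interact. It assumes a LLaVA-style model that exposes the projector as `multi_modal_projector`; that attribute name is an assumption, so check your own checkpoint's module names first.

```python
import torch

def freeze_all_but_projector(model: torch.nn.Module) -> None:
    """Freeze every parameter, then re-enable only the projector.

    Assumes the projector lives at `model.multi_modal_projector`
    (LLaVA-style naming); adjust to your model's actual module name.
    """
    # Freeze everything first: language model, vision model, and any
    # LoRA adapters that were already injected into the model.
    for param in model.parameters():
        param.requires_grad = False
    # Then unfreeze only the projector.
    for param in model.multi_modal_projector.parameters():
        param.requires_grad = True

def trainable_parameter_names(model: torch.nn.Module) -> list[str]:
    """Sanity check: list the parameters the optimizer will actually update."""
    return [name for name, p in model.named_parameters() if p.requires_grad]
```

After calling `freeze_all_but_projector(model)`, `trainable_parameter_names(model)` should list only projector weights. If any `lora_A` / `lora_B` entries still show up, the LoRA adapters were injected after the freeze and need to be disabled (use_lora = False) or frozen explicitly.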

Thx