johnsmith0031/alpaca_lora_4bit

Merging LoRA after finetune


Some questions about merging the LoRA back to the base model...

  1. Should the LoRA finetuned on the 4-bit GPTQ model be merged back to the fp16 version of the same model?
  2. When merging the LoRA into the fp16 model, is it recommended to use the PeftModel.merge_and_unload method? (a rough sketch of what I mean follows this list)
  3. Do you expect the generation speed to increase when using the merged model that is GPTQ'ed after merging, compared to the base GPTQ model with LoRA applied on top of it?
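
For question 2, this is roughly the flow I have in mind: a minimal sketch using peft's merge_and_unload against a standard Hugging Face fp16 checkpoint. The paths are placeholders, not actual checkpoints from this repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder paths -- substitute your own checkpoints.
base_model_path = "path/to/llama-7b-fp16"   # fp16 base model, not the GPTQ one
lora_path = "path/to/lora-adapter"          # adapter produced by the 4-bit finetune

# Load the fp16 base model and attach the LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, lora_path)

# Fold the LoRA weights into the base weights and drop the adapter wrappers.
merged_model = model.merge_and_unload()

# Save the merged fp16 checkpoint (this is what would get GPTQ'ed afterwards, per question 3).
merged_model.save_pretrained("path/to/merged-fp16")
AutoTokenizer.from_pretrained(base_model_path).save_pretrained("path/to/merged-fp16")
```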
  1. Technically you can do it, but the performance will be worse than applying the LoRA to the model it was originally trained on.
  2. Not sure about it.
  3. I think you can try exllama. Its inference speed is very fast (even with a LoRA applied, it is still faster than the original fp16 model).