pprp/Pruner-Zero

Regarding Sparsity of LoRA fine-tuned model.

Closed this issue · 1 comments

Hello @pprp,

The LoRA fine-tuned models cannot be merged without destroying sparsity, since the LoRA branch is dense. This makes LoRA fine-tuning less useful for regaining the performance lost during pruning. Please let me know if you are employing any sparse LoRA fine-tuning.
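To make the concern concrete, here is a minimal toy sketch (hypothetical shapes and random weights, not the actual model) showing that adding a dense low-rank update `B @ A` to a pruned weight matrix yields a dense result:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # toy layer width and LoRA rank (illustrative only)

# Unstructured ~50% pruning mask applied to a random weight matrix.
mask = rng.random((d, d)) < 0.5
W_sparse = rng.standard_normal((d, d)) * mask

# Dense LoRA branch: delta = B @ A has (almost surely) no exact zeros.
B = rng.standard_normal((d, r))
A = rng.standard_normal((r, d))
W_merged = W_sparse + B @ A

def sparsity(M):
    return float((M == 0).mean())

print(f"sparsity before merge: {sparsity(W_sparse):.2f}")
print(f"sparsity after merge:  {sparsity(W_merged):.2f}")
```

Because the low-rank update is a sum of products of continuous random values, essentially every entry of the merged matrix is nonzero, so the pruning sparsity is lost unless the LoRA branch is kept unmerged at inference time.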

pprp commented

Hi, sorry for the late reply.

For LoRA fine-tuning, we employ the same method as Wanda, which does not take sparsity into consideration.

For sparse LoRA fine-tuning, you can refer to:

  • LoRAPrune: Structured Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning
  • Sparse Fine-tuning for Inference Acceleration of Large Language Models
  • APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference (ICML24 Oral)
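Alongside those works, a simple baseline workaround (my own sketch, not something the paper or those references prescribe) is to re-apply the pruning mask after merging the LoRA branch. This restores the original sparsity pattern, at the cost of discarding the part of the LoRA update that falls on pruned positions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # toy layer width and LoRA rank (illustrative only)

mask = rng.random((d, d)) < 0.5              # pruning mask (True = kept)
W_sparse = rng.standard_normal((d, d)) * mask
B = rng.standard_normal((d, r))              # LoRA factors: delta = B @ A
A = rng.standard_normal((r, d))

# Merge, then re-mask so pruned positions stay exactly zero.
W_merged = (W_sparse + B @ A) * mask

print("sparsity after re-masked merge:", float((W_merged == 0).mean()))
```

Whether the dropped part of the update hurts accuracy is an empirical question; the sparse-fine-tuning papers listed above avoid it by constraining the update itself.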