huggingface/peft

Add support for OpenELM LoRA fine-tuning

RonanKMcGovern opened this issue · 2 comments

Feature request

Allow fine-tuning of OpenELM models with LoRA via PEFT.

Motivation

Full fine-tuning is difficult for these models and does not produce great results. Perhaps LoRA would provide useful regularization/smoothing and improved results.

Your contribution

Right now, when I run LoRA fine-tuning, all of the modules appear to remain trainable...

It seems that OpenELMForCausalLM is not supported by PEFT? A sketch of how the target modules might be specified manually is below.
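For reference, here is a minimal sketch of how LoRA could be applied by listing the target modules explicitly, since OpenELM is not in PEFT's default target-module mapping. The checkpoint name and the module names ("qkv_proj", "out_proj") are assumptions and should be checked against model.named_modules():

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed checkpoint for illustration; OpenELM requires trust_remote_code=True.
model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M",
    trust_remote_code=True,
)

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    # Module names are assumptions -- inspect model.named_modules() for the
    # actual linear-layer names in OpenELM's attention/MLP blocks.
    target_modules=["qkv_proj", "out_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report only the LoRA parameters
```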

Note this error when trying to print trainable params:

trainer.model.print_trainable_parameters()

AttributeError: 'OpenELMForCausalLM' object has no attribute 'print_trainable_parameters'
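As far as I can tell, print_trainable_parameters() is a method of peft.PeftModel rather than of the base transformers model, so the call only works once the model has been wrapped with get_peft_model(). A rough check, assuming the trainer object above:

```python
from peft import PeftModel

# If the model was never wrapped by PEFT, count parameters by hand instead.
if isinstance(trainer.model, PeftModel):
    trainer.model.print_trainable_parameters()
else:
    total = sum(p.numel() for p in trainer.model.parameters())
    trainable = sum(p.numel() for p in trainer.model.parameters() if p.requires_grad)
    print(f"trainable params: {trainable:,} || all params: {total:,}")
```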