QwenLM/Qwen-VL

[BUG] How can I unfreeze other parts of the model for full (non-LoRA) training while training with LoRA?

jweihe opened this issue · 2 comments

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in FAQ?

  • I have searched FAQ

Current Behavior

At the end of training only the LoRA weights are saved; I would like to save all of the weights instead.

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

zhd36 commented

Save all of the trainable parameters to a separate file:

to_save = {f"{name}.{param_name}": param.detach().cpu() for name, module in model.named_modules() for param_name, param in module.named_parameters() if param.requires_grad}
torch.save(to_save, 'output_qwen_test/lora_adapter_model.pth')

When loading, first load the model with from_pretrained, then load the saved parameters on top of it:

saved_parameters = torch.load('output_qwen_test/lora_adapter_model.pth')
# strict=False because the file only contains the trainable subset of the state dict;
# the remaining weights keep the values loaded by from_pretrained.
result = model.load_state_dict(saved_parameters, strict=False)
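
To double-check the load, the value returned by load_state_dict can be inspected: with strict=False it is a named tuple whose missing_keys lists model parameters not present in the file (expected here) and whose unexpected_keys lists file entries that matched nothing (should be empty). A small sanity check:

# Any unexpected key points to a naming mismatch between the saved file and the model.
assert not result.unexpected_keys, result.unexpected_keys
print(f"parameters restored: {len(saved_parameters)}, left untouched: {len(result.missing_keys)}")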

This seems to work.

In the end I implemented it with modules_to_save.
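
For reference, modules_to_save is an argument of PEFT's LoraConfig: the listed modules are kept fully trainable (no LoRA decomposition), and their complete weights are written out together with the adapter when save_pretrained is called on the PeftModel. Below is a minimal sketch assuming PEFT is used as in the repo's LoRA finetuning script; the target_modules mirror the defaults in finetune.py (verify against your version), while the modules_to_save entries are only illustrative and should be replaced with the actual module names from model.named_modules():

from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn", "attn.c_proj", "w1", "w2"],  # layers that receive LoRA adapters
    modules_to_save=["wte", "lm_head"],  # illustrative: trained and saved in full, without LoRA
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms the extra modules are trainable

With such a config, save_pretrained on the resulting PeftModel stores the weights of the modules_to_save modules alongside the LoRA adapter, so no manual torch.save step is needed for them.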