ymcui/Chinese-LLaMA-Alpaca-3

Error when merging LoRA model

MonetCH opened this issue · 1 comment

Checklist before submitting

  • Make sure you are using the latest code from the repository (git pull).
  • I have read the FAQ section of the project documentation and searched existing issues; no similar problem or solution was found.
  • Third-party plugin issues: for problems with e.g. llama.cpp or text-generation-webui, please look for a solution in the corresponding project first.

Issue type

Model conversion and merging

Base model

Llama-3-Chinese-8B (foundation model)

Operating system

Linux

Detailed problem description

The following error occurs when merging the LoRA model with merge_llama3_with_chinese_lora_low_mem.py:

Traceback (most recent call last):
  File "/mlsteam/data/Q21/nick/Chinese-LLaMA-Alpaca-3/scripts/merge_llama3_with_chinese_lora_low_mem.py", line 234, in <module>
    lora_config = peft.LoraConfig.from_pretrained(lora_model_path)
  File "/usr/local/lib/python3.10/dist-packages/peft/config.py", line 137, in from_pretrained
    config = config_cls(**kwargs)
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'enable_lora'

My base model is meta-llama/Meta-Llama-3-8B, but my LoRA model was trained with the run_pt.sh from the Chinese-LLaMA-Alpaca-2 repository; could that be the cause?


Dependencies (required for code-related issues)

# Paste your dependencies here (inside this code block)
bitsandbytes              0.43.1
peft                      0.7.1
pytorch-quantization      2.1.2
torch                     2.3.0a0+ebedce2
torch-tensorrt            2.3.0a0
torchdata                 0.7.1a0
torchtext                 0.17.0a0
torchvision               0.18.0a0
transformers              4.40.0

Run logs or screenshots

# Paste the run log here (inside this code block)
python3 merge_llama3_with_chinese_lora_low_mem.py \
        --base_model meta-llama/Meta-Llama-3-8B \
        --lora_model ../../Chinese-LLaMA-Alpaca-2/scripts/training/output/checkpoint-241690/pt_lora_model/ \
        --output_type huggingface
================================================================================
Base model: meta-llama/Meta-Llama-3-8B
LoRA model: ../../Chinese-LLaMA-Alpaca-2/scripts/training/output/checkpoint-241690/pt_lora_model/
Loading ../../Chinese-LLaMA-Alpaca-2/scripts/training/output/checkpoint-241690/pt_lora_model/
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Traceback (most recent call last):
  File "/mlsteam/data/Q21/nick/Chinese-LLaMA-Alpaca-3/scripts/merge_llama3_with_chinese_lora_low_mem.py", line 234, in <module>
    lora_config = peft.LoraConfig.from_pretrained(lora_model_path)
  File "/usr/local/lib/python3.10/dist-packages/peft/config.py", line 137, in from_pretrained
    config = config_cls(**kwargs)
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'enable_lora'

Resolved it myself.
Solution:
Delete the enable_lora and merge_weights entries from the adapter_config.json of the LoRA model.
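The fix above can also be scripted. A minimal sketch (the helper name `strip_legacy_keys` is hypothetical, not part of the repo): older peft releases wrote `enable_lora` and `merge_weights` into adapter_config.json, but `LoraConfig.__init__` in newer peft versions (e.g. 0.7.1) no longer accepts these keywords, so `from_pretrained` raises the `TypeError` shown in the log. Removing the keys before merging avoids the crash:

```python
import json

def strip_legacy_keys(config_path):
    """Drop LoRA config keys that newer peft versions no longer accept.

    'enable_lora' and 'merge_weights' may appear in adapter configs written
    by older training scripts (e.g. Chinese-LLaMA-Alpaca-2's run_pt.sh setup)
    and cause a TypeError in LoraConfig.from_pretrained.
    """
    with open(config_path) as f:
        config = json.load(f)
    # pop() with a default is a no-op if the key is already absent
    for key in ("enable_lora", "merge_weights"):
        config.pop(key, None)
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# e.g. strip_legacy_keys(".../pt_lora_model/adapter_config.json")
```

Editing the file by hand works just as well; the script is only convenient if you have several checkpoints to clean up.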