jianzhnie/LLamaTuner

Error when loading the fine-tuned Llama-2-7b model


After fine-tuning the model, I ran the following command for an inference test and got an error:
---- Command ----
python gradio_webserver.py \
    --model_name_or_path model_inference/model_path/Llama-2-7b-chat-ms \
    --lora_model_name_or_path ~/model_inference/model_path/checkpoint-344

---- Error ----
Loading the LoRA adapter from model_inference/model_path/checkpoint-344
Traceback (most recent call last):
File "model_inference/Efficient-Tuning-LLMs/chatllms/utils/apply_lora.py", line 90, in
apply_lora(base_model_path=args.base_model_path,
File "model_inference/Efficient-Tuning-LLMs/chatllms/utils/apply_lora.py", line 69, in apply_lora
model: PreTrainedModel = PeftModel.from_pretrained(base_model,
File "model_inference/peft/src/peft/peft_model.py", line 304, in from_pretrained
config = PEFT_TYPE_TO_CONFIG_MAPPING[
File "~/model_inference/peft/src/peft/config.py", line 134, in from_pretrained
config = config_cls(**kwargs)
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'loftq_config'

How can this problem be solved?
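For reference, PeftModel.from_pretrained() deserializes adapter_config.json from the checkpoint directory into the installed LoraConfig, so the mismatch can be confirmed by listing the saved keys that the installed class does not accept. A minimal diagnostic sketch, assuming the adapter path from the command above:

```python
import inspect
import json

from peft import LoraConfig

# Adapter directory passed via --lora_model_name_or_path above.
adapter_dir = "model_inference/model_path/checkpoint-344"

# This is the file that PeftModel.from_pretrained() turns into a LoraConfig.
with open(f"{adapter_dir}/adapter_config.json") as f:
    saved_config = json.load(f)

# Keyword arguments accepted by the LoraConfig of the *installed* PEFT.
accepted = set(inspect.signature(LoraConfig).parameters)

# Keys saved by a newer PEFT but unknown to the installed one show up here;
# per the traceback, 'loftq_config' is such a key.
print(set(saved_config) - accepted)
```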

Thanks to the author for the guidance, hehe. It was resolved through a chat on WeChat: it is a PEFT version problem. It runs fine with 0.4.0. I fine-tuned with 0.63.0, which did not match the deployment environment, and that is why the error occurred.
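For anyone hitting the same error: the cleanest fix is what worked here, pinning the same PEFT version in the fine-tuning and deployment environments (pip install peft==0.4.0). If the versions cannot be aligned, an untested alternative is to strip the fields that the older LoraConfig rejects from adapter_config.json; this is only a sketch, and the file should be backed up first:

```python
import json
import shutil

# Adapter directory from the command in the issue.
adapter_dir = "model_inference/model_path/checkpoint-344"
config_path = f"{adapter_dir}/adapter_config.json"

# Keep a backup before editing the file in place.
shutil.copy(config_path, config_path + ".bak")

with open(config_path) as f:
    config = json.load(f)

# Drop fields written by the newer PEFT that the older LoraConfig does not
# know; 'loftq_config' comes from the traceback. Extend the tuple with any
# other keys reported by the diagnostic sketch above.
for key in ("loftq_config",):
    config.pop(key, None)

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```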