LoraConfig error: ValueError: Target modules ['q', 'k', 'v'] not found in the base model. Please check the target modules and try again.
nameless0704 opened this issue · 6 comments
nameless0704 commented
I finished training with train.py and got the ./saved/finetune_0.pt
file. Since train.py has no inference part, I ran the final inference cell in LoRA_finetune_with_stanford_alpaca.ipynb,
which raised:
Traceback (most recent call last):
File "inference.py", line 49, in <module>
module.query_key_value = peft.tuners.lora.LoraModel(config, module.query_key_value)
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python38\lib\site-packages\peft\tuners\lora.py", line 118, in __init__
self._find_and_replace()
File "C:\Users\xxxx\AppData\Local\Programs\Python\Python38\lib\site-packages\peft\tuners\lora.py", line 181, in _find_and_replace
raise ValueError(
ValueError: Target modules ['q', 'k', 'v'] not found in the base model. Please check the target modules and try again.
The LoraConfig I used is (the target_modules are identical in the ipynb and in train.py):
config = LoraConfig(
    peft_type="LORA",
    r=32,
    lora_alpha=32,
    target_modules=["q", "k", "v"],
    lora_dropout=0.1,
)
Why does this error occur? I ran into it before but couldn't find anything by searching... Has anyone managed to get inference working?
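For context: this error means PEFT found no submodule whose name matches "q", "k", or "v". In ChatGLM the attention projection is a single fused query_key_value Linear, so those names only exist after that layer has been split. A quick way to see which names are actually present (a generic sketch using a stand-in model, since the real model isn't shown here; substitute your own):

```python
# List the Linear submodule names that LoRA's target_modules must match.
# "model" below is a stand-in with ChatGLM-like naming; replace it with
# the actual loaded model when debugging.
import torch.nn as nn

model = nn.ModuleDict({
    "attention": nn.ModuleDict({"query_key_value": nn.Linear(8, 24)})
})

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        # Prints "attention.query_key_value" -- there is no "q"/"k"/"v"
        # until the fused layer is split.
        print(name)
```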
Data2Me commented
What command line did you use to run train.py?
nameless0704 commented
What command line did you use to run train.py?
...I just ran python train.py directly, without accelerate launch. accelerate launch always gives me a connection error, but that doesn't seem to affect training; it just runs single-process.
lich99 commented
You also need to insert LoRA before inference:
for key, module in model.named_modules():
    if key.endswith('attention'):
        try:
            # Here we split the query_key_value layer into three linear layers for LoRA. But you can also use a merged linear layer.
            qkv_layer = QKV_layer(module.query_key_value.in_features, module.query_key_value.out_features)
            qkv_layer.update(module.query_key_value)
            module.query_key_value = qkv_layer
        except:
            pass
        module.query_key_value = peft.tuners.lora.LoraModel(config, module.query_key_value)
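(The QKV_layer class used above is defined in the repo's notebook, not shown in this thread. A minimal sketch of what such a class might look like, assuming the fused output holds query, key, and value of equal size:)

```python
# Hypothetical sketch of a QKV_layer that splits a fused query_key_value
# Linear into three separate Linears named q/k/v, so that LoRA's
# target_modules=["q", "k", "v"] can find them. Assumes out_features is
# divisible by 3 (equal-sized query, key, and value projections).
import torch
import torch.nn as nn


class QKV_layer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        each = out_features // 3
        self.q = nn.Linear(in_features, each)
        self.k = nn.Linear(in_features, each)
        self.v = nn.Linear(in_features, each)

    def update(self, fused: nn.Linear):
        # Copy the fused layer's weights/bias into the three split layers,
        # preserving the original q/k/v row ordering.
        ws = fused.weight.data.chunk(3, dim=0)
        bs = fused.bias.data.chunk(3, dim=0) if fused.bias is not None else (None, None, None)
        for layer, w, b in zip((self.q, self.k, self.v), ws, bs):
            layer.weight.data.copy_(w)
            if b is not None:
                layer.bias.data.copy_(b)

    def forward(self, x):
        # Concatenate so the output matches the original fused layer.
        return torch.cat([self.q(x), self.k(x), self.v(x)], dim=-1)
```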
nameless0704 commented
You also need to insert LoRA before inference:
for key, module in model.named_modules():
    if key.endswith('attention'):
        try:
            # Here we split the query_key_value layer into three linear layers for LoRA. But you can also use a merged linear layer.
            qkv_layer = QKV_layer(module.query_key_value.in_features, module.query_key_value.out_features)
            qkv_layer.update(module.query_key_value)
            module.query_key_value = qkv_layer
        except:
            pass
        module.query_key_value = peft.tuners.lora.LoraModel(config, module.query_key_value)
I do have that block, and I still get the same error, even when running train.py straight through to inference in one go...
lich99 commented
It could be this: if your model has already had LoRA inserted, you don't need to insert it again; just load the checkpoint directly.
nameless0704 commented
It could be this: if your model has already had LoRA inserted, you don't need to insert it again; just load the checkpoint directly.
Thanks a lot... it really was because I inserted LoRA one extra time. It works now!
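To summarize the fix: insert LoRA into the model exactly once, then load the fine-tuned checkpoint; a second insertion fails because the fused query_key_value layers were already replaced. A hedged sketch of a guard against double insertion (the helper name and the `_lora_inserted` flag are my own, not from the repo):

```python
# Idempotency guard: run the LoRA-insertion loop at most once per model.
# A second pass would raise "Target modules ['q', 'k', 'v'] not found",
# because the layers it looks for were already replaced the first time.
import torch.nn as nn


def insert_lora_once(model, insert_fn):
    # insert_fn is the insertion loop from the notebook (hypothetical
    # callable here). After it runs, mark the model so a repeated call
    # becomes a no-op instead of a ValueError.
    if getattr(model, "_lora_inserted", False):
        return model
    insert_fn(model)
    model._lora_inserted = True
    return model
```

After the single insertion, loading `./saved/finetune_0.pt` with the model's state-dict loader is all that remains.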