Error running inference with t5_unet
NguyenNhoTrung opened this issue · 4 comments
I get an error when I run the code. How can I fix it?
(lavi-bridge) user@hg-ai-02:/hdd/trungnn/LaVi-Bridge/test$ bash run.sh
/home/user/miniconda3/envs/lavi-bridge/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5_fast.py:160: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5.
For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with truncation is True.
- Be aware that you SHOULD NOT rely on t5-large automatically truncating your input to 512 when padding/encoding.
- If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding.
- To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value.
  warnings.warn(
Traceback (most recent call last):
  File "/hdd/trungnn/LaVi-Bridge/test/t5_unet.py", line 116, in <module>
    main(args, prompts)
  File "/hdd/trungnn/LaVi-Bridge/test/t5_unet.py", line 46, in main
    monkeypatch_or_replace_lora_extended(
  File "/hdd/trungnn/LaVi-Bridge/test/../modules/lora.py", line 780, in monkeypatch_or_replace_lora_extended
    _module._modules[name] = _tmp
UnboundLocalError: local variable '_tmp' referenced before assignment
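(Side note: the FutureWarning at the top is unrelated to the crash. Per the warning text itself, it goes away if the tokenizer is instantiated with an explicit `model_max_length`. A minimal sketch, assuming the tokenizer is loaded from the t5-large checkpoint named in the warning:)

```python
# Hedged sketch: silence the T5 tokenizer FutureWarning by setting
# model_max_length explicitly, as the warning message itself suggests.
from transformers import T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-large", model_max_length=512)
```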
Also, I get an error when creating the conda environment from LaVi-Bridge/environment.yaml:
The conflict is caused by:
The user requested huggingface-hub==0.17.3
diffusers 0.24.0 depends on huggingface-hub>=0.19.4
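(The resolver message itself points at a possible fix: diffusers 0.24.0 needs `huggingface-hub>=0.19.4`, so one option, untested here, is to change the `huggingface-hub==0.17.3` pin in environment.yaml to `0.19.4` or newer and recreate the environment.)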
The LoRA error is likely caused by the `_find_modules` function in lora.py failing to find target classes such as `nn.Linear` or `nn.Conv2d`. This can happen when the linear or convolutional layers in T5 or the U-Net are defined with a different class. For example, if the `peft` package is not installed, the convolutional layers in the U-Net are instances of `LoRACompatibleConv` rather than `nn.Conv2d`, so `_find_modules` fails to match `nn.Conv2d` when injecting LoRA into the U-Net. The code at line 724 or line 749 of lora.py is then never executed, which leads to the error "UnboundLocalError: local variable '_tmp' referenced before assignment". You can check whether the `peft` package is installed, or debug along the lines described above.
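For reference, here is a minimal sketch of the pattern that produces the UnboundLocalError (a paraphrase, not the actual lora.py code; the `make_lora_*` helpers are made-up stand-ins):

```python
import torch.nn as nn

def _replace_child(_module, name, _child_module, make_lora_linear, make_lora_conv):
    # The branches only match the expected layer classes, so a class such as
    # diffusers' LoRACompatibleConv (used by the U-Net when peft is not
    # installed) matches neither branch and `_tmp` is never assigned.
    if _child_module.__class__ == nn.Linear:
        _tmp = make_lora_linear(_child_module)  # cf. lora.py line 724
    elif _child_module.__class__ == nn.Conv2d:
        _tmp = make_lora_conv(_child_module)    # cf. lora.py line 749
    _module._modules[name] = _tmp  # UnboundLocalError if neither branch ran
```

An explicit `else` branch that raises a `TypeError` naming the offending class would at least turn the crash into a readable error message.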
I have already fixed the code. Thank you so much.
@NguyenNhoTrung how did you fix the problem "UnboundLocalError: local variable '_tmp' referenced before assignment"?
Thanks for your work. I have the same issue, but the problem still persists even after installing the peft package. I found that the peft package is not being called.
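A quick way to check whether diffusers actually switched to the peft backend (a hedged sketch; the `USE_PEFT_BACKEND` flag matches diffusers 0.24, and the checkpoint path is only an example):

```python
import torch.nn as nn
from diffusers import UNet2DConditionModel
from diffusers.utils import USE_PEFT_BACKEND

# False here means diffusers did not pick up peft (e.g. it is missing, too
# old, or installed into a different environment), so the U-Net keeps its
# LoRACompatibleConv/LoRACompatibleLinear layers.
print("peft backend active:", USE_PEFT_BACKEND)

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # example checkpoint only
)
# If LoRACompatibleConv appears in this set, lora.py's class checks will
# still fail to match nn.Conv2d.
print({type(m).__name__ for m in unet.modules()
       if isinstance(m, (nn.Conv2d, nn.Linear))})
```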