Reproduce error when using 'Use LoRA'
gynchoi opened this issue · 1 comment
Hello,
I encountered an error where the optimization never terminates when using LoRA.
You can see from the image below that the processing time is more than 2000s.
I used the same image that was reproduced successfully in Issue #2, and all other parameters are the defaults.
If I run without LoRA, it works as expected.
The terminal message is as follows:
```
Running on local URL: ...
Running on public URL: ...
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
/<PATH>/miniconda3/envs/sdedrag/lib/python3.9/site-packages/diffusers/models/attention_processor.py:1946: FutureWarning: `LoRAAttnProcessor2_0` is deprecated and will be removed in version 0.26.0. Make sure use AttnProcessor2_0 instead by settingLoRA layers to `self.{to_q,to_k,to_v,to_out[0]}.lora_layer` respectively. This will be done automatically when using `LoraLoaderMixin.load_lora_weights`
  deprecate(
```
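For reference, the `FutureWarning` above looks like a deprecation notice rather than the cause of the hang. A minimal sketch of the loading style it points to (letting `load_lora_weights` attach the LoRA layers instead of constructing `LoRAAttnProcessor2_0` by hand); the base model id and LoRA path here are placeholders, not what this repo actually uses:

```python
# Minimal sketch of the loading style the FutureWarning recommends:
# LoraLoaderMixin.load_lora_weights() wires the LoRA layers into
# to_q/to_k/to_v/to_out[0] automatically. Model id and LoRA path are
# placeholders, not the repo's actual values.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights("./lora_checkpoint_dir")  # placeholder LoRA path
```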
I think this may be due to the environment (a package version conflict), so could you distribute a more precise environment file?
The dependencies you provided already caused some errors, since this repo does not support the newest Gradio version.
I reproduced the issue in the following environment, but LoRA still doesn't work:
```bash
conda create -n sdedrag python=3.9
conda activate sdedrag
pip install torch==2.0.0 torchvision transformers
pip install diffusers==0.25.1 accelerate==0.17.0 gradio==3.41.1 opencv-python
```
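As a small sanity check (just a diagnostic sketch, not part of the repo), this prints the versions the environment actually resolved to so they can be compared against the pins above:

```python
# Print the versions installed in the sdedrag environment for comparison
# against the versions listed in the install commands above.
import torch, torchvision, transformers, diffusers, accelerate, gradio

for name, module in [
    ("torch", torch),
    ("torchvision", torchvision),
    ("transformers", transformers),
    ("diffusers", diffusers),
    ("accelerate", accelerate),
    ("gradio", gradio),
]:
    print(f"{name}=={module.__version__}")
```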
I think it is largely because the process was terminated. sde-drag shows a progress bar while the LoRA is being trained. You can try running the code again.
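To illustrate what to look for, here is a toy sketch (not the repo's actual training loop, and the step count is a placeholder) of the kind of progress bar that appears during the LoRA fine-tuning stage; if no such bar ever shows up or advances, the training step likely never started or was killed:

```python
# Toy illustration only: the LoRA fine-tuning stage runs a fixed number of
# optimization steps, and a bar like this advancing is the sign that training
# is still making progress rather than hanging.
import time
from tqdm import tqdm

lora_steps = 200  # placeholder step count

for step in tqdm(range(lora_steps), desc="Training LoRA"):
    time.sleep(0.01)  # stand-in for one optimization step
```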