Trying to start training on an Intel CPU
PetroRudyi opened this issue · 2 comments
PetroRudyi commented
Hello,
I tried to run train_fill50k.py, but I get an error. I removed --mixed_precision="fp16" from the launch command and got the result below. Running the original command on a GPU in Colab also returns the same result.
05/08/2023 14:04:05 - INFO - __main__ - ***** Running training *****
05/08/2023 14:04:05 - INFO - __main__ - Num examples = 50000
05/08/2023 14:04:05 - INFO - __main__ - Num Epochs = 100
05/08/2023 14:04:05 - INFO - __main__ - Instantaneous batch size per device = 1
05/08/2023 14:04:05 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 1
05/08/2023 14:04:05 - INFO - __main__ - Gradient Accumulation steps = 1
05/08/2023 14:04:05 - INFO - __main__ - Total optimization steps = 5000000
Steps: 0%| | 0/5000000 [00:00<?, ?it/s]
Steps: 0%| | 0/5000000 [00:00<?, ?it/s]/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/schedulers/scheduling_ddpm.py:172: FutureWarning: Accessing `num_train_timesteps` directly via scheduler.num_train_timesteps is deprecated. Please use `scheduler.config.num_train_timesteps` instead
deprecate(
Traceback (most recent call last):
File "/Users/petro/PycharmProjects/ControlLoRA/train_text_to_image_control_lora.py", line 1006, in <module>
main()
File "/Users/petro/PycharmProjects/ControlLoRA/train_text_to_image_control_lora.py", line 782, in main
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py", line 695, in forward
sample, res_samples = downsample_block(
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/unet_2d_blocks.py", line 867, in forward
hidden_states = attn(
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/transformer_2d.py", line 265, in forward
hidden_states = block(
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/attention.py", line 294, in forward
attn_output = self.attn1(
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/attention_processor.py", line 243, in forward
return self.processor(
File "/Users/petro/PycharmProjects/ControlLoRA/models.py", line 230, in __call__
attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/models/attention_processor.py", line 302, in prepare_attention_mask
deprecate(
File "/Users/petro/PycharmProjects/ControlLoRA/venv/lib/python3.9/site-packages/diffusers/utils/deprecation_utils.py", line 18, in deprecate
raise ValueError(
ValueError: The deprecation tuple ('batch_size=None', '0.0.15', 'Not passing the `batch_size` parameter to `prepare_attention_mask` can lead to incorrect attention mask preparation and is deprecated behavior. Please make sure to pass `batch_size` to `prepare_attention_mask` when preparing the attention_mask.') should be removed since diffusers' version 0.15.0 is >= 0.0.15
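For context, the traceback ends in models.py line 230, where the custom attention processor calls attn.prepare_attention_mask without a batch_size argument. In diffusers 0.15.0 that falls into a deprecation path whose removal version is written as '0.0.15', which the installed version already exceeds, so deprecate() raises a ValueError instead of just warning. A minimal sketch of the kind of change that avoids the deprecated code path, assuming hidden_states with shape (batch_size, sequence_length, dim) is in scope inside the processor's __call__, as in diffusers' own AttnProcessor implementations:

# Hypothetical patch in models.py (__call__ of the custom attention processor).
# Assumes `hidden_states` has shape (batch_size, sequence_length, dim).
batch_size, sequence_length, _ = hidden_states.shape
# Passing batch_size explicitly avoids the deprecated batch_size=None branch
# in diffusers' prepare_attention_mask.
attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)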
HighCWu commented
Maybe you should use diffusers==0.13.0.
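For anyone hitting the same error, the downgrade can be done before relaunching training; this assumes a pip-managed environment like the venv shown in the traceback:

pip install diffusers==0.13.0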
PetroRudyi commented
Heh.
Thanks, something started working.