OOM problem when using pix2pix-zero
nightrain-vampire commented
When I try to edit a single image with the pix2pix-zero method on an RTX 3090 (24 GB), it reports:
editing image [scripts/0_right.jpg] with [directinversion+pix2pix-zero]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:16<00:00, 3.11it/s]
Traceback (most recent call last):
  File "run_editing_pix2pix_zero_one_image.py", line 213, in <module>
    edited_image = edit_image_directinversion_pix2pix_zero(
  File "run_editing_pix2pix_zero_one_image.py", line 127, in edit_image_directinversion_pix2pix_zero
    latent_list, x_inv_image, x_dec_img = pipe(
  File "/data/user3/edit4fairness/PnPInversion/models/pix2pix_zero/ddim_inv.py", line 146, in __call__
    image = self.decode_latents(latents.detach())
  File "/data/user3/edit4fairness/PnPInversion/models/pix2pix_zero/base_pipeline.py", line 271, in decode_latents
    image = self.vae.decode(latents).sample
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/diffusers/models/autoencoder_kl.py", line 144, in decode
    decoded = self._decode(z).sample
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/diffusers/models/autoencoder_kl.py", line 116, in _decode
    dec = self.decoder(z)
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/diffusers/models/vae.py", line 188, in forward
    sample = up_block(sample)
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/diffusers/models/unet_2d_blocks.py", line 1714, in forward
    hidden_states = resnet(hidden_states, temb=None)
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/diffusers/models/resnet.py", line 488, in forward
    hidden_states = self.conv2(hidden_states)
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/data/user3/miniconda3/envs/p2pzero/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 23.65 GiB total capacity; 13.60 GiB already allocated; 23.56 MiB free; 13.82 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
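The error message itself suggests setting max_split_size_mb when reserved memory is much larger than allocated memory. A minimal sketch of applying that hint, assuming it runs before the first CUDA allocation in run_editing_pix2pix_zero_one_image.py (the 128 MiB value is only an example, not something taken from the repo):

```python
import os

# The caching allocator reads PYTORCH_CUDA_ALLOC_CONF the first time CUDA
# memory is allocated, so this must run before the model is moved to the GPU.
# max_split_size_mb:128 is only an example value to reduce fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the environment variable is set
```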
Why does the OOM occur when I am only editing a single image? How can I solve it?
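For reference, a minimal sketch of two other memory-saving tweaks, assuming `pipe` is the pipeline built in run_editing_pix2pix_zero_one_image.py and that the installed diffusers release exposes AutoencoderKL.enable_slicing():

```python
import torch

# Decode latents one sample at a time instead of as a single batch; this
# assumes the installed diffusers version provides enable_slicing() on the VAE.
pipe.vae.enable_slicing()

# Release cached allocator blocks left over from the inversion pass before
# the VAE decode, which can help if fragmentation is the real issue.
torch.cuda.empty_cache()
```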