Not able to run inference mode on higher-resolution input.
santosh-shriyan opened this issue · 1 comments
santosh-shriyan commented
I am trying to run it in inference mode (runGan.py 1) on 1920×1080 input frames.
I am using an NVIDIA GTX 1060 6 GB GPU and getting this error:
```
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[1,64,2880,5120] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/conv2d_transpose_1 (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[1,64,2880,5120] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node generator/generator_unit/conv_tran2highres/conv_tran2/Conv2d_transpose/conv2d_transpose_1 (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py:1748) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[generator/Assign_1/_203]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
```
I understand that this is an out-of-memory error: the VRAM is exhausted.
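For scale, here is a rough back-of-the-envelope estimate of that single activation tensor's footprint, assuming float32 (4 bytes per element, which matches the `type float` in the log):

```python
# Shape reported in the OOM message: [1, 64, 2880, 5120]
batch, channels, height, width = 1, 64, 2880, 5120

bytes_per_float32 = 4
tensor_bytes = batch * channels * height * width * bytes_per_float32

# About 3.5 GiB for one intermediate tensor -- more than half of a 6 GB
# card, before counting weights, other activations, and allocator overhead.
print(tensor_bytes / 2**30)  # ~3.52 GiB
```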
My question is: how can I optimize it for my limited-resource scenario?
Note: I am facing the same issue at 480p resolution as well. I also see that many users have posted issues requesting a 2x implementation, which could help mitigate this problem.
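One common workaround for this kind of OOM (not a feature of runGan.py; a hypothetical sketch of the general technique) is to split each frame into overlapping tiles, super-resolve each tile separately, and stitch the outputs, so peak activation size is bounded by the tile size rather than the full frame:

```python
import numpy as np

def split_into_tiles(frame, tile_h, tile_w, overlap):
    """Split an H x W x C frame into overlapping tiles.

    Returns a list of ((y, x), tile) pairs, where (y, x) is the tile's
    top-left corner in the original frame. The overlap gives the
    stitching step context so seams can be blended away. Tiles at the
    right/bottom edges may be smaller than tile_h x tile_w.
    """
    h, w = frame.shape[:2]
    stride_h = tile_h - overlap
    stride_w = tile_w - overlap
    tiles = []
    for y in range(0, max(h - overlap, 1), stride_h):
        for x in range(0, max(w - overlap, 1), stride_w):
            tiles.append(((y, x), frame[y:y + tile_h, x:x + tile_w]))
    return tiles

# Example: a 1080p frame cut into 270x480 tiles with a 16-pixel overlap;
# each tile would then be fed through the generator on its own.
frame = np.zeros((1080, 1920, 3), dtype=np.float32)
tiles = split_into_tiles(frame, 270, 480, 16)
```

The trade-off is extra compute on the overlapping regions and the need for a blending step when reassembling the upscaled tiles.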
mirh commented
In theory, 2x is supported here:
https://github.com/skycrapers/TecoGAN-PyTorch