OOM issue + PyTorch version request
Fannzi opened this issue · 3 comments
Thanks for the great work!
When trying to train your network, I ran into OOM issues. Are there ways to reduce the RAM consumption during training (the batch_size is already 1)? How much RAM is required to train this network?
By the way, I hope the PyTorch version can be implemented soon.
Thanks again for the work!
Hi, I am trying to implement the PyTorch version in my spare time.
I think it will take some time. I will do my best.
Is the OOM error coming from GPU or CPU RAM?
I don't remember exactly how much memory is required, but I've never run into a memory issue myself.
You might want to reduce 1) the batch size (which you already tried), 2) the patch size, or 3) the network size (the blur estimation network, domain adaptation network, or sharpness calibration network).
But note that the performance of the network might decrease.
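For reference, here is a minimal sketch of the kind of tweaks that usually help with GPU OOM in a TF 1.x training setup like this one. The `allow_growth` option is a standard TensorFlow 1.x setting; the `patch_size`/`batch_size` attribute names in the comments are hypothetical, so check config.py for the actual names this repository uses.

```python
import tensorflow as tf

# Optional: let TensorFlow allocate GPU memory incrementally instead of
# reserving the whole card up front, so `nvidia-smi` reflects the model's
# actual footprint while you experiment with smaller settings.
sess_config = tf.ConfigProto()
sess_config.gpu_options.allow_growth = True
sess = tf.Session(config=sess_config)

# Then, in config.py, reduce the training crop size (activation memory grows
# roughly quadratically with the patch side length) and, if still needed,
# the channel widths of the sub-networks. The exact attribute names depend
# on config.py, e.g. something like:
#   config.TRAIN.patch_size = 128   # hypothetical name/value
#   config.TRAIN.batch_size = 1     # already 1 in this case
```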
Hi, thanks for your reply.
The OOM is coming from GPU RAM. I will try reducing the patch size. Thanks again!
I've trained the network on an NVIDIA Titan Xp (12 GB) with the settings provided in the config.py file.