soft_skel function memory issue
Hello, I am applying the soft-clDice loss to a segmentation network with Cityscapes as input (2048x1024x3).
With a batch size of 16, the network trains well with other losses such as cross-entropy.
However, I run into "CUDA out of memory" when I use the soft-clDice loss, even with a smaller batch size of 1 or 2.
I use PyTorch and an NVIDIA 1080 Ti with 12GB of memory.
I traced the memory allocation of my graphics card with `watch -n 1 nvidia-smi` and found that the issue occurs in the for loop of the `soft_skel` function.
How can I solve this problem?
Hi,
unfortunately the skeletonization is a sequential operation: every iteration adds another set of erosion/dilation ops to the autograd graph, so the memory grows with the number of iterations. For your example you may be able to do one of the following things:
- decrease the number of iterations of the soft skeletonization. Most likely your roads are rather "thin" pixelwise; as a rule of thumb you want roughly as many iterations as the radius of your thickest road (see the first sketch after this list).
- use smaller patches. You could train your model on smaller, randomly sampled ROIs of the image, e.g. 192x192 (see the second sketch below).
- this may not be practical, but use a larger GPU.
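To make the first suggestion concrete, here is a minimal 2D sketch of the iterative soft skeletonization, loosely following the repo's PyTorch version (the kernel sizes and the `iter_` value are illustrative assumptions). It shows why GPU memory scales with the number of iterations: every pass through the loop appends more pooling ops to the autograd graph.

```python
import torch
import torch.nn.functional as F

def soft_erode(img):
    # Soft erosion: min-pooling, implemented as negated max-pooling (2D case).
    p1 = -F.max_pool2d(-img, (3, 1), (1, 1), (1, 0))
    p2 = -F.max_pool2d(-img, (1, 3), (1, 1), (0, 1))
    return torch.min(p1, p2)

def soft_dilate(img):
    # Soft dilation: plain max-pooling with "same" padding.
    return F.max_pool2d(img, (3, 3), (1, 1), (1, 1))

def soft_open(img):
    return soft_dilate(soft_erode(img))

def soft_skel(img, iter_):
    # Every iteration adds erode/dilate/open ops to the autograd graph,
    # so activation memory grows roughly linearly with iter_.
    img1 = soft_open(img)
    skel = F.relu(img - img1)
    for _ in range(iter_):
        img = soft_erode(img)
        img1 = soft_open(img)
        delta = F.relu(img - img1)
        skel = skel + F.relu(delta - skel * delta)
    return skel

# For thin roads, a small number of iterations is usually enough:
# skel = soft_skel(probs, iter_=5)
```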
A combination of these measures may also work best.
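And for the patch-based training, a minimal sketch of random ROI sampling; the `random_patch` helper and the tensor shapes are assumptions for illustration, e.g. for use inside a Dataset's `__getitem__`:

```python
import torch

def random_patch(image, label, patch=192):
    # image: (C, H, W) float tensor, label: (H, W) long tensor (assumed shapes).
    _, h, w = image.shape
    top = torch.randint(0, h - patch + 1, (1,)).item()
    left = torch.randint(0, w - patch + 1, (1,)).item()
    return (image[:, top:top + patch, left:left + patch],
            label[top:top + patch, left:left + patch])
```

Going from 2048x1024 down to 192x192 cuts the per-sample activations by a factor of roughly 57, and the skeletonization graph shrinks by the same factor.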