yuval-alaluf/stylegan3-editing

GPU RAM grows to OOM during inference

ikcla opened this issue · 2 comments

ikcla commented

I tried to use inversion/video/inference_on_video.py to run inference on a test video. With 11 GB of GPU RAM, the process handles up to 599 frames and then runs out of GPU memory. Looking at the top-level loop of the code, it processes one frame at a time, so I wonder why GPU memory keeps accumulating across iterations of the for loop. I tried using del, gc.collect(), and torch.cuda.empty_cache() inside the loop, but none of that helps (see the sketch below). Do you have any pointers on this issue? Thanks.
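For reference, this is roughly the cleanup I tried inside the loop (the loop body and variable names are illustrative, not the repo's actual code):

import gc

import torch

def process_video(frames, net):
    for frame in frames:
        with torch.no_grad():
            result_batch, latents = net(frame)  # illustrative per-frame inference call
        # ... store/use the outputs ...
        del result_batch, latents   # drop the local references
        gc.collect()                # reclaim unreferenced Python objects
        torch.cuda.empty_cache()    # release cached CUDA blocks back to the driver
        # GPU memory still grew frame after frame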

yuval-alaluf commented

The memory keeps growing because we store the results in the following line:

results = {"source_images": [], "result_images": [], "result_latents": {}, "landmarks_transforms": []}

One option is to change the following lines:

results["result_images"].append(result_batch[0][-1])
results["result_latents"][image_name] = latents[0][-1]

You could try calling .cpu() on both tensors, i.e., latents[0][-1].cpu() and result_batch[0][-1].cpu().
You may need to move them back to the GPU later, but with this change the loop should run without accumulating GPU memory.
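Concretely, the two lines would then read (just appending .cpu() to the existing lines):

results["result_images"].append(result_batch[0][-1].cpu())
results["result_latents"][image_name] = latents[0][-1].cpu()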

Hope this helps!

ikcla commented

@yuval-alaluf
Thank you for your help.
I had already profiled the code and found that GPU memory accumulates faster because of the result images rather than the latents, since an image stored as a tensor is much larger than a latent. As in the code above, the result images are kept as GPU tensors referenced from the results dictionary, which runs my low-memory GPU out of memory. I had to move them back to the CPU and save them to disk in order to run the sequential tasks on a 5-minute video.
Another thing I found while profiling is that everything is also kept in CPU memory: the aligned images, the cropped images, and the result images. That causes an OOM on the CPU side as well, since the Image objects are large (my system has 128 GB of RAM but still could not handle a 5-minute video). I had to save them to disk and lazily load a single item per iteration, roughly as sketched below.
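A minimal sketch of the per-frame offloading I ended up with (the cache directory and both helper functions are illustrative, not part of the repo):

from pathlib import Path

import torch

CACHE_DIR = Path("frame_cache")  # hypothetical on-disk cache
CACHE_DIR.mkdir(exist_ok=True)

def offload_frame(image_name, result_tensor, latent):
    # Move both tensors off the GPU and persist them, so nothing
    # from this frame stays resident in GPU or CPU memory.
    torch.save(result_tensor.cpu(), CACHE_DIR / f"{image_name}_image.pt")
    torch.save(latent.cpu(), CACHE_DIR / f"{image_name}_latent.pt")

def iter_frames(image_names):
    # Lazily reload one frame at a time when assembling the output video.
    for name in image_names:
        image = torch.load(CACHE_DIR / f"{name}_image.pt")
        latent = torch.load(CACHE_DIR / f"{name}_latent.pt")
        yield image, latent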

You guys did great work, and I learned so much from it. Do you have any experience with the FFHQ-U pretrained model in this experiment?