BrokenSource/DepthFlow

Memory Leak in DepthFlow: GPU Usage Increases with Each Video Creation

muhammad-ahmed-ghani opened this issue · 2 comments

🔘 Operating system

Linux (Ubuntu-like)

🔘 Runtime environment

PyPI Wheels (pip, poetry, pdm, rye, etc)

🔘 Python version

Python 3.10

🔘 GPU Vendor

NVIDIA Desktop

🔘 Description

Hi @Tremeschin
When running the video creation process multiple times, the GPU memory usage increases by approximately 70 MB with each execution. For instance, after running the process 10 times, the GPU memory consumption increases by around 700 MB.

It appears that the issue is not related to the DepthAnythingV2 model, as the depth map memory is being cleared correctly. Instead, the problem seems to originate from the DepthFlow component, which may not be releasing memory as expected.

Could you please assist with identifying the cause and providing a solution to resolve this memory accumulation? Thank you.

🔘 Traceback

No response

I've successfully rendered about 200 videos with this script file without memory leaks.

The most important parts are sharing a single depth estimator across runs and releasing the OpenGL context after using a scene. The release should happen automatically, but it's better to do it explicitly than to trust the garbage collector. That said, 70 MB per run suggests the latter is your problem, since a leaked torch model would be much bigger.

You can also keep a single scene and estimator loaded and iterate over them to render a batch of videos, though whether that works depends on whether you're doing parallel encodings/threading. Sharing or keeping loaded scenes and estimator models also gives nice performance improvements, so if it's possible in your case, go for it :)

Let me know if any of these help; they're the most common ways to leak memory in batch processing.
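The pattern described above can be sketched as follows. This is a minimal, self-contained illustration: `FakeScene` is a hypothetical stand-in for DepthFlow's actual scene class, so the sketch runs without the library; in real code the `destroy()` call corresponds to `scene.window.destroy()`, which the thread below confirms fixes the leak.

```python
# Batch-rendering pattern: load the expensive resource (the depth estimator)
# once and share it, create a fresh scene per video, and release each scene's
# GPU context explicitly instead of trusting the garbage collector.

class FakeScene:
    """Hypothetical stand-in for a DepthFlow scene holding a GPU context."""

    def __init__(self, estimator):
        self.estimator = estimator  # shared across all renders, loaded once
        self.destroyed = False

    def render(self, image: str, output: str) -> str:
        if self.destroyed:
            raise RuntimeError("Scene used after its context was destroyed")
        return f"{output} rendered from {image}"

    def destroy(self) -> None:
        # Real code: scene.window.destroy() releases the OpenGL context
        self.destroyed = True


def render_batch(images: list[str], estimator) -> list[str]:
    """Render one video per image, releasing each context explicitly."""
    outputs = []
    for index, image in enumerate(images):
        scene = FakeScene(estimator)  # new scene per video
        try:
            outputs.append(scene.render(image, f"video-{index}.mp4"))
        finally:
            scene.destroy()  # freed even if a render raises, so nothing leaks
    return outputs
```

The `try`/`finally` guarantees the context (the ~70 MB per run observed above) is released even when a render fails; the alternative single-scene variant would create one scene, render every video through it, and destroy it once at the end.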

Yes, `scene.window.destroy()` does the job for me. Thanks.