danier97/ST-MFNet

CUDA out of memory


Hi @danielism97, I have an RTX 2060 with 6 GB of VRAM, but the model needs more than that to run. How can I use smaller sequences to run inference on a lower-memory GPU?

Hi Muhammad,

The model does require quite a bit of memory. For example, interpolating at a resolution of 960x540 needs around 9 GB of GPU memory. This is indeed a limitation of the model.

However, the interpolate_yuv.py script provides options for block-wise interpolation: each frame is divided into small blocks for inference, and the interpolated blocks are then aggregated to form the full frame. Here is an example:

python interpolate_yuv.py \
--net STMFNet \
--checkpoint <path to pre-trained model (.pth file)> \
--yuv_path <path to input YUV file> \
--size <spatial size of input YUV file, e.g. 1920x1080> \
--out_fps <output FPS, e.g. 60> \
--out_dir <desired output dir> \
--patch_size 256 \
--overlap 3 \
--batch_size 4

The last three arguments control how the block-wise evaluation is performed. You can experiment with these values until inference fits on your GPU. However, please note that interpolating this way can produce blocking artefacts near the block edges.
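For intuition, here is a minimal sketch of what block-wise evaluation does, assuming a model that maps an input patch to an output patch of the same spatial size; the function name `tiled_interpolate` and the simple averaging of overlapping regions are illustrative assumptions, not the script's actual implementation:

```python
import torch

def tiled_interpolate(model, frames, patch_size=256, overlap=4):
    """Illustrative block-wise inference: split the input into overlapping
    patches, run the model on each patch, and average the overlaps.
    `frames` is a (B, C, H, W) tensor."""
    _, _, h, w = frames.shape
    stride = patch_size - 2 * overlap        # adjacent patches share 2*overlap pixels
    out = torch.zeros_like(frames)
    weight = torch.zeros_like(frames)        # per-pixel contribution count
    for top in range(0, h, stride):
        for left in range(0, w, stride):
            bottom = min(top + patch_size, h)
            right = min(left + patch_size, w)
            with torch.no_grad():            # inference only, saves memory
                pred = model(frames[:, :, top:bottom, left:right])
            out[:, :, top:bottom, left:right] += pred
            weight[:, :, top:bottom, left:right] += 1
    return out / weight.clamp(min=1)         # average where patches overlap
```

Smaller `--patch_size` and `--batch_size` values reduce peak memory at the cost of more (and smaller) forward passes, which is why shrinking them is the first thing to try on a 6 GB card.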

Thanks.

I tried using those last three arguments, but unfortunately I keep getting the exact same error:

"RuntimeError: CUDA out of memory. Tried to allocate 1.05 GiB (GPU 0; 11.00 GiB total capacity; 8.26 GiB already allocated; 0 bytes free; 8.96 GiB reserved in total by PyTorch)"

Hi,

Apologies! I hadn't enabled it properly before. I have just updated the code, so it should work now.

Thanks.

It should be noted that an overlap of 3 throws a fatal error. I raised it to 4 and it seems to be working.
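For anyone tuning these settings, a small back-off loop can automate the search. This is only a sketch: `run_blockwise` is a hypothetical callable wrapping the script's block-wise path, and keeping the overlap even reflects the observation above, not a confirmed constraint of interpolate_yuv.py:

```python
import torch

def find_fitting_settings(run_blockwise,
                          patch_sizes=(256, 192, 128),
                          batch_sizes=(4, 2, 1),
                          overlap=4):          # even overlap, per the note above
    """Try progressively smaller patch/batch sizes until block-wise
    inference fits in GPU memory. `run_blockwise` is hypothetical."""
    for patch_size in patch_sizes:
        for batch_size in batch_sizes:
            try:
                run_blockwise(patch_size=patch_size,
                              overlap=overlap,
                              batch_size=batch_size)
                return patch_size, batch_size
            except RuntimeError as e:
                if "out of memory" not in str(e):
                    raise                      # unrelated failure: re-raise
                torch.cuda.empty_cache()       # drop cached blocks before retrying
    raise RuntimeError("no tested setting fits in the available GPU memory")
```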