CUDA out of memory
javierBarandiaran opened this issue · 0 comments
I am trying to evaluate the Middlebury dataset with these parameters:
main_stereo.py --eval --resume pretrained/gmstereo-scale2-regrefine3-resumeflowthings-middleburyfthighres-a82bec03.pth --val_dataset middlebury --middlebury_resolution F --padding_factor 32 --upsample_factor 4 --num_scales 2 --attn_type self_swin2d_cross_swin1d --attn_splits_list 2 8 --corr_radius_list -1 4 --prop_radius_list -1 1 --reg_refine --num_reg_refine 3
I am getting this out of memory error:
File "/hal/pytorch/unimatch/unimatch/attention.py", line 85, in single_head_split_window_attention
    scores = torch.matmul(q.view(b_new, -1, c), k.view(b_new, -1, c).permute(0, 2, 1))
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.85 GiB (GPU 0; 31.75 GiB total capacity; 23.68 GiB already allocated; 4.12 GiB free; 25.24 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
My GPU has 32 GB of memory. Do I need more memory, or is there an error in the input parameters?
If I include the parameter "--inference_size 1024 1536", as in the demo script, the error goes away.
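This is consistent with the attention-score tensor in single_head_split_window_attention scaling quadratically with the number of tokens per window: the matmul materializes a [batch * splits^2, L, L] tensor, where L is the tokens in one window. Below is a rough back-of-the-envelope sketch of that tensor's size; the feature-map dimensions (full-resolution Middlebury features vs. the 1024x1536 inference size, both assumed here at 1/4 scale) are illustrative assumptions, not values taken from the code.

```python
def attn_scores_gib(h, w, num_splits, batch=2, bytes_per_el=4):
    """Approximate size (GiB) of the float32 attention-score tensor.

    h, w:        feature-map height/width in tokens (assumed values below)
    num_splits:  window split factor per spatial dimension (e.g. --attn_splits_list)
    batch:       2, assuming left and right views are batched together
    """
    # Tokens per window after an num_splits x num_splits partition.
    tokens_per_window = (h // num_splits) * (w // num_splits)
    num_windows = num_splits ** 2
    # scores shape: [batch * num_windows, L, L]
    return batch * num_windows * tokens_per_window ** 2 * bytes_per_el / 1024 ** 3

# Assumed full-res Middlebury features at 1/4 scale, 8x8 window split:
print(f"{attn_scores_gib(497, 720, num_splits=8):.2f} GiB")   # ~14.85 GiB

# Assumed 1024x1536 inference size at 1/4 scale, same split:
print(f"{attn_scores_gib(256, 384, num_splits=8):.2f} GiB")   # ~1.13 GiB
```

Under these assumptions, a single score tensor at full resolution already needs ~15 GiB on top of everything else PyTorch holds, so downscaling with --inference_size (and letting the model upsample the output afterwards) is the expected way to fit on a 32 GB GPU rather than a parameter error.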