WangYueFt/dgcnn

Cuda memory

shersoni610 opened this issue · 4 comments

Hello,

I am trying to run the PyTorch example on a 6GB CUDA card and I get the following message:

RuntimeError: CUDA out of memory. Tried to allocate 640.00 MiB (GPU 0; 5.94 GiB total capacity; 4.54 GiB already allocated; 415.44 MiB free; 143.32 MiB cached)

How can we run the examples on 6GB cards?

Thanks

@shersoni610 I also had the same problem.
My environment:
Win10 (I changed some code so it runs on Win10),
one 1080Ti,
Anaconda py3.6, CUDA 9.0, cuDNN 7.5, PyTorch 1.1

I solved this problem by setting num_workers=0 in the DataLoader() call in pytorch/main.py. I also tried a smaller training batch size, but in the end I still use 32 and it works.
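For anyone unsure where the change goes, here is a minimal sketch of the idea. The dataset below is a dummy stand-in (the real repo loads ModelNet40 point clouds), so treat the names and shapes as illustrative, not the repo's exact code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for the point-cloud dataset used by dgcnn:
# 64 clouds of 1024 points with xyz coordinates, plus integer labels.
dummy = TensorDataset(torch.randn(64, 1024, 3),
                      torch.zeros(64, dtype=torch.long))

# num_workers=0 loads batches in the main process instead of spawning
# worker processes (which can add host-memory pressure, especially on
# Windows). A smaller batch_size is what directly cuts peak GPU memory.
loader = DataLoader(dummy, batch_size=16, shuffle=True, num_workers=0)

points, labels = next(iter(loader))
print(tuple(points.shape))  # (16, 1024, 3)
```

Strictly speaking, num_workers affects host memory and multiprocessing, not CUDA allocations, so the batch size is usually the knob that matters for the out-of-memory error itself.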

Hi, @shersoni610 @zxczrx123

I'm having the same problem on a meager GT 1030: RuntimeError: CUDA out of memory. Tried to allocate 80.00 MiB (GPU 0; 1.95 GiB total capacity; 947.23 MiB already allocated; 24.25 MiB free; 1.02 GiB reserved in total by PyTorch)

Changing the number of workers does not help. By the way, what's the point of setting them to zero?

Any help with changing the batch size?

Thank you

Setting the default value of the test_batch_size argument in main.py from 16 to 8 worked for me.
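Since the batch sizes are argparse arguments, you can also lower them from the command line instead of editing the default. A small sketch of the pattern (argument names mirror the comment above; check `python main.py --help` in your copy to confirm what the repo actually exposes):

```python
import argparse

# Minimal mock of main.py's argument parsing, for illustration only.
parser = argparse.ArgumentParser()
parser.add_argument('--batch_size', type=int, default=32)
parser.add_argument('--test_batch_size', type=int, default=8)  # was 16

args = parser.parse_args([])  # no CLI args -> use the (lowered) defaults
print(args.test_batch_size)   # 8

# Or override per run without touching the file:
args = parser.parse_args(['--test_batch_size', '4'])
print(args.test_batch_size)   # 4
```

Peak CUDA memory scales roughly linearly with batch size, so halving it is usually the quickest way to fit on a small card.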