Out of memory
HJLYU0519 opened this issue · 3 comments
Even if I set the training batch size to 1, it still shows out of memory. My GPU is a GTX 1080 Ti, which should be enough for this model.
Traceback (most recent call last):
File "/home/hongjin/PycharmProjects/Matrix-Capsules-EM-PyTorch-master/train.py", line 223, in &lt;module&gt;
main()
File "/home/hongjin/PycharmProjects/Matrix-Capsules-EM-PyTorch-master/train.py", line 210, in main
best_acc = test(test_loader, model, criterion, device)
File "/home/hongjin/PycharmProjects/Matrix-Capsules-EM-PyTorch-master/train.py", line 175, in test
output = model(data)
File "/home/hongjin/anaconda3/envs/python3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/hongjin/PycharmProjects/Matrix-Capsules-EM-PyTorch-master/model/capsules.py", line 338, in forward
x = self.conv_caps1(x)
File "/home/hongjin/anaconda3/envs/python3.6/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/hongjin/PycharmProjects/Matrix-Capsules-EM-PyTorch-master/model/capsules.py", line 260, in forward
p_out, a_out = self.caps_em_routing(v, a_in, self.C, self.eps)
File "/home/hongjin/PycharmProjects/Matrix-Capsules-EM-PyTorch-master/model/capsules.py", line 182, in caps_em_routing
a_out, mu, sigma_sq = self.m_step(a_in, r, v, eps, b, B, C, psize)
File "/home/hongjin/PycharmProjects/Matrix-Capsules-EM-PyTorch-master/model/capsules.py", line 124, in m_step
sigma_sq = torch.sum(coeff * (v - mu)**2, dim=1, keepdim=True) + eps
RuntimeError: CUDA error: out of memory
Process finished with exit code 1
Hi, have you tried setting --test-batch-size to a smaller number? The error seems to be thrown inside test(). The default test_batch_size is 1000.
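For context, the intermediate tensors built during EM routing (votes, means, variances) all carry a leading batch dimension, so their memory grows linearly with the test batch size; that is why shrinking it resolves the OOM. A rough back-of-the-envelope sketch (the layer sizes B=32, K=3, C=32, psize=16 are assumptions about a typical configuration of this repo, not values read from the code):

```python
def vote_tensor_bytes(batch, B=32, K=3, C=32, psize=16, itemsize=4):
    """Rough bytes for one float32 vote tensor of shape
    [batch, B*K*K, C, psize] per spatial position, as in a
    convolutional capsule layer (layer sizes are assumptions)."""
    return batch * B * K * K * C * psize * itemsize

# Memory scales linearly with batch size, so batch 1000 needs
# 1000x the memory of batch 1 for this tensor alone, and EM
# routing allocates several tensors of comparable size.
print(vote_tensor_bytes(1000) / 2**30)  # GiB at the default test batch
print(vote_tensor_bytes(64) / 2**30)    # GiB at a reduced test batch
```

Something like `python train.py --test-batch-size 64` should then fit on an 11 GB card; the exact workable value depends on the model configuration, so it may take a couple of tries.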
Many thanks for your kind answer; it works.
But I have another question: I think your code cannot reach the test accuracy (on the smallNORB dataset) reported in the original Matrix Capsules paper, right? Put differently, I cannot find any implementation that reaches the accuracy reported in the original paper.
@HJLYU0519 please check out #5 to see related discussion.