jacobgil/pytorch-pruning

RuntimeError: dimension out of range (expected to be in range of [-2, 1], but got 3)

pgadosey opened this issue · 18 comments

Hi Jacob, I get this error when I run finetune.py --prune:

Traceback (most recent call last):
File "fine_tune.py", line 271, in
fine_tuner.prune()
File "fine_tune.py", line 218, in prune
prune_targets = self.get_candidates_to_prune(num_filters_to_prune_per_iteration)
File "fine_tune.py", line 184, in get_candidates_to_prune
self.train_epoch(rank_filters = True)
File "fine_tune.py", line 179, in train_epoch
self.train_batch(optimizer, batch.cuda(), label.cuda(), rank_filters)
File "fine_tune.py", line 172, in train_batch
self.criterion(output, Variable(label)).backward()
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 156, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/init.py", line 98, in backward
variables, grad_variables, retain_graph)
File "fine_tune.py", line 77, in compute_rank
sum(dim=2).sum(dim=3)[0, :, 0, 0].data
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 476, in sum
return Sum.apply(self, dim, keepdim)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/_functions/reduce.py", line 21, in forward
return input.sum(dim)
RuntimeError: dimension out of range (expected to be in range of [-2, 1], but got 3)

I have not been able to figure out exactly what's causing the error.

eeric commented

So did I; I get the same error.

Hi, I will check this and get back to you with an answer in the next few days.
What version of PyTorch are you using?

import torch
print(torch.__version__)

eeric commented

Mine is 0.2.0_1, and it failed.

My friend's is 0.1.12_2, and it worked for him.

Mine is 0.2.0_1 as well... Does it have anything to do with the PyTorch version?

torch.sum no longer keeps the reduced dimension by default in this PyTorch version; I added keepdim=True to the sum calls and that seems to have solved the issue.
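To be concrete, the change goes on the line shown in the traceback above, inside compute_rank. Here is a small runnable sketch of the idea (the tensor shapes are made up, not the repo's):

import torch

x = torch.randn(8, 64, 14, 14)            # (batch, filters, height, width), made-up sizes

# PyTorch 0.1.12 kept the reduced dimension, so chained sums over dims 0, 2 and 3 worked.
# Newer versions drop each reduced dimension, so dim=3 goes out of range unless
# keepdim=True is passed explicitly:
values = x.sum(dim=0, keepdim=True) \
          .sum(dim=2, keepdim=True) \
          .sum(dim=3, keepdim=True)[0, :, 0, 0]
print(values.size())                      # torch.Size([64]): one value per filter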

eeric commented

@pgadosey, I did the same, and then got the following error:
Number of prunning iterations to reduce 67% filters 5
Ranking filters 0 times..
Layers that will be prunned {0: 2, 2: 5, 5: 6, 7: 9, 10: 26, 12: 21, 14: 20, 17: 70, 19: 60, 21: 55, 24: 64, 26: 67, 28: 107}
Prunning filters..
Traceback (most recent call last):
File "finetune.py", line 302, in
fine_tuner.prune()
File "finetune.py", line 233, in prune
model = prune_vgg16_conv_layer(model, layer_index, filter_index)
File "/home/yq/cnn/prune/pytorch-pruning/prune.py", line 33, in prune_vgg16_conv_layer
bias = conv.bias)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/conv.py", line 250, in init
False, _pair(0), groups, bias)
File "/usr/local/lib/python2.7/dist-packages/torch/nn/modules/conv.py", line 34, in init
if bias:
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/variable.py", line 123, in bool
torch.typename(self.data) + " is ambiguous")

RuntimeError: bool value of Variable objects containing non-empty torch.FloatTensor is ambiguous

@eeric
I have encountered this problem too. Have you fixed it? Thank you.

eeric commented

@BUPTLdy, not solved; I still use torch version 0.1.12_2.

@eeric Thanks for your reply. I solved it by setting bias = True in prune.py.

eeric commented

OK, but for a ResNet model that setting gives an error.

eeric commented

@BUPTLdy, where exactly do you set bias = True in prune.py?

@eeric
At lines 33 and 58 of prune.py, change the bias argument to bias = True.
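Roughly, the change looks like this (the argument list below is paraphrased from prune.py, so treat the exact names as an assumption):

# Around lines 33 and 58 of prune.py: nn.Conv2d expects a bool for `bias`, but the
# original code passes the old layer's bias tensor, which is what triggers the
# "bool value of Variable ... is ambiguous" error on newer PyTorch.
new_conv = torch.nn.Conv2d(in_channels=conv.in_channels,
                           out_channels=conv.out_channels - 1,
                           kernel_size=conv.kernel_size,
                           stride=conv.stride,
                           padding=conv.padding,
                           dilation=conv.dilation,
                           groups=conv.groups,
                           bias=True)     # was: bias=conv.bias

bias=(conv.bias is not None) also works and is closer to the original intent, since it only adds a bias when the old layer had one.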

eeric commented

@BUPTLdy, oh, but that still raises an error when the new VGG convolution layers inherit the old weights, and the same goes for ResNet.

@eeric
At lines 33 and 58 of prune.py, change the bias argument to bias = True.

Why set bias = True in those two lines?

@BUPTLdy

Edit:
Besides, modifying the code according to your advice does not solve the problem; another error came up. Do you have any idea how to solve this out-of-memory issue? I am thinking of reducing the batch size or some other parameter, but I am not sure where in the code to change it; my best guess is the sketch after the traceback below.

[phung@archlinux pytorch-pruning]$ python finetune.py --prune
/usr/lib/python3.7/site-packages/torchvision/transforms/transforms.py:187: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  warnings.warn("The use of the transforms.Scale transform is deprecated, " +
/usr/lib/python3.7/site-packages/torchvision/transforms/transforms.py:562: UserWarning: The use of the transforms.RandomSizedCrop transform is deprecated, please use transforms.RandomResizedCrop instead.
  warnings.warn("The use of the transforms.RandomSizedCrop transform is deprecated, " +
Accuracy:  0.5848
Number of prunning iterations to reduce 67% filters 5
Ranking filters.. 
Traceback (most recent call last):
  File "finetune.py", line 270, in <module>
    fine_tuner.prune()
  File "finetune.py", line 217, in prune
    prune_targets = self.get_candidates_to_prune(num_filters_to_prune_per_iteration)
  File "finetune.py", line 184, in get_candidates_to_prune
    self.train_epoch(rank_filters = True)
  File "finetune.py", line 179, in train_epoch
    self.train_batch(optimizer, batch.cuda(), label.cuda(), rank_filters)
  File "finetune.py", line 172, in train_batch
    self.criterion(output, Variable(label)).backward()
  File "/usr/lib/python3.7/site-packages/torch/tensor.py", line 96, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/usr/lib/python3.7/site-packages/torch/autograd/__init__.py", line 90, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: CUDA error: out of memory
[phung@archlinux pytorch-pruning]$ 
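For reference, my guess at where to shrink the batch size is wherever the data loaders are built (dataset.py in this repo, I believe); something along these lines, with the path and names as placeholders:

import torch.utils.data
from torchvision import datasets, transforms

# Hypothetical loader; "train_path" is a placeholder for the ImageFolder directory.
transform = transforms.Compose([transforms.Resize(256),
                                transforms.CenterCrop(224),
                                transforms.ToTensor()])
train_dataset = datasets.ImageFolder("train_path", transform)
train_loader = torch.utils.data.DataLoader(train_dataset,
                                           batch_size=16,  # lower this until backward() fits in GPU memory
                                           shuffle=True,
                                           num_workers=4)
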
eeric commented

@ProMach
bias = True because the bias is needed there.

torch.sum no longer keeps the reduced dimension by default in this PyTorch version; I added keepdim=True to the sum calls and that seems to have solved the issue.

I couldn't find the keepdim argument in the torch.sum calls. Could you please share the line number?

@Anurag0212 I have made the modification at https://github.com/promach/pytorch-pruning/ . You could just use that repo. I am still trying to figure out how to test model_prunned on some given classes.

RuntimeError Traceback (most recent call last)
in ()
23 # convert output probabilities to predicted class
24 #_, pred = torch.max(output, 1)
---> 25 _, pred = torch.max(output, dim=1)
26 # compare predictions to true label
27 correct_tensor = pred.eq(target.data.view_as(pred))

RuntimeError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
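Edit: my guess, going by the [-1, 0] range in the message, is that output only has one dimension here (just the class scores, with no batch axis), so dim=1 does not exist. Restoring the batch axis before taking the max should make dim=1 valid; a hypothetical sketch, with output coming from my test snippet:

# Workaround sketch: make sure the output has a (batch, num_classes) shape.
if output.dim() == 1:
    output = output.unsqueeze(0)   # (num_classes,) -> (1, num_classes)
_, pred = torch.max(output, dim=1)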