
The dimension is wrong


```
Traceback (most recent call last):
  File "main.py", line 105, in <module>
    main(opt)
  File "main.py", line 70, in main
    log_dict_train, _ = trainer.train(epoch, train_loader)
  File "/home/yusheng/code/GraspKpNet-main/src/lib/trains/base_trainer.py", line 126, in train
    return self.run_epoch('train', epoch, data_loader)
  File "/home/yusheng/code/GraspKpNet-main/src/lib/trains/base_trainer.py", line 74, in run_epoch
    output, loss, loss_stats = model_with_loss(batch)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yusheng/code/GraspKpNet-main/src/lib/trains/base_trainer.py", line 20, in forward
    loss, loss_stats = self.loss(outputs, batch)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yusheng/code/GraspKpNet-main/src/lib/trains/dbmctdet.py", line 68, in forward
    lm_focal_loss = self.crit(output['lm'], batch['lm']) / opt.num_stacks
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yusheng/code/GraspKpNet-main/src/lib/models/losses.py", line 284, in forward
    return self.neg_loss(out, target)
  File "/home/yusheng/code/GraspKpNet-main/src/lib/models/losses.py", line 166, in _neg_loss
    pos_loss = torch.log(pred) * torch.pow(1 - pred, 2) * pos_inds
RuntimeError: The size of tensor a (128) must match the size of tensor b (64) at non-singleton dimension 3
```
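For context, the exception is raised inside the keypoint focal loss, which requires the predicted heatmap and the ground-truth heatmap to have identical shapes; here they disagree at the last spatial dimension (128 vs 64). Below is a minimal sketch of a CenterNet-style penalty-reduced focal loss (not necessarily the repository's exact `_neg_loss`) that hits the same broadcasting failure when the two shapes differ:

```python
import torch

def neg_loss(pred, target):
    """Penalty-reduced pixel-wise focal loss (CenterNet style).
    pred and target must share the same shape, e.g. (B, C, H, W)."""
    pos_inds = target.eq(1).float()
    neg_inds = target.lt(1).float()
    neg_weights = torch.pow(1 - target, 4)

    # This is the line reported at losses.py:166; it broadcasts pred
    # against pos_inds, so mismatched H/W raises the RuntimeError above.
    pos_loss = torch.log(pred) * torch.pow(1 - pred, 2) * pos_inds
    neg_loss = torch.log(1 - pred) * torch.pow(pred, 2) * neg_weights * neg_inds

    num_pos = pos_inds.sum()
    if num_pos == 0:
        return -neg_loss.sum()
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos

# Shapes must agree: feeding a (1, 1, 64, 128) prediction with a
# (1, 1, 64, 64) target reproduces the "size of tensor a (128)" error.
pred = torch.rand(1, 1, 64, 64).clamp(1e-4, 1 - 1e-4)
target = torch.rand(1, 1, 64, 64)
print(neg_loss(pred, target))
```

In pipelines of this kind, such a mismatch usually means the dataset loader builds the ground-truth heatmap at a different resolution (or down-sampling ratio) than the model's output head, so checking the input size and output stride settings is a reasonable first step.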

Sorry for the late reply. I just corrected some bugs; the previous model was only for internal use. Please let me know if everything is OK now.

OK, I will give it a try. One more question: when I train the network on the Cornell dataset, it seems to overfit, but when I train it on the AJD, that doesn't happen. Are there any tricks?

I think the main reason is that the original Cornell dataset is too small, and even after augmentation its variation is still limited. To prevent overfitting, I think early stopping will help. I tried decreasing the learning rate, but it didn't help much.
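For what it's worth, a self-contained early-stopping sketch (hypothetical helper names, not code from this repository) looks like this: stop once the validation loss has not improved for a fixed number of epochs, and keep the best checkpoint.

```python
import math

def train_one_epoch(epoch):
    """Placeholder for something like trainer.train(epoch, train_loader)."""
    ...

def validate(epoch):
    """Placeholder for a real validation pass; a toy loss curve that
    improves at first and then starts to overfit after epoch 10."""
    return 1.0 / epoch + (0.02 * (epoch - 10) if epoch > 10 else 0.0)

patience = 5                      # epochs to wait without improvement
best_val, bad_epochs = math.inf, 0

for epoch in range(1, 101):
    train_one_epoch(epoch)
    val_loss = validate(epoch)
    if val_loss < best_val - 1e-4:    # meaningful improvement
        best_val, bad_epochs = val_loss, 0
        # save the best checkpoint here
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"Early stop at epoch {epoch}, best val loss {best_val:.4f}")
            break
```

The patience value and improvement threshold are just illustrative; on a small dataset like Cornell it also helps to monitor the held-out split closely, since the training loss alone will keep decreasing even while generalization degrades.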