torch/demos

inconsistent tensor size error

Closed this issue · 1 comment

When I tried to replace the inputs and targets with torch.Tensor in the train module, LuaJIT returned this error at criterion:backward(output, targets[i]).
I've checked that output is a tensor while targets[i] is a number, but the same code works just fine in train-on-cifar.
Even though, in a torch7 Google Groups discussion, SMW said that repeatedly using the same instance of a nonlinearity can lead to this error, I'm still confused about the cause, and I don't think the nonlinearity unit is to blame.
Can anyone help me? thanks~

I've checked the docs and figured it out.
Here are some hints from doc/criterion.md for anyone who comes across the same question:

  1. The torch7 library can run your code on the GPU via CUDA, but the assumption is that every module you call :cuda() on has a corresponding CUDA implementation (.cu files or equivalent functions in some scripts).
  2. I haven't found any relevant CUDA functions for the criterion, so the criterion:cuda() call in torch-tutorials/2_supervised/3_loss.lua may be misleading.
  3. criterion:backward(input, target) expects target to be a class index (a number from 1 to the number of classes), so feel free to pass a plain number!
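
To illustrate point 3, here is a minimal sketch (assuming nn.ClassNLLCriterion and a tiny made-up linear model; the sizes are arbitrary) showing that the criterion accepts a plain number as the target, not a tensor:

```lua
require 'nn'

-- A tiny model: 4 input features, 3 classes.
local model = nn.Sequential()
model:add(nn.Linear(4, 3))
model:add(nn.LogSoftMax())

local criterion = nn.ClassNLLCriterion()

local input  = torch.randn(4)
local target = 3  -- a plain class index (1 to #classes), NOT a tensor

local output = model:forward(input)
local loss = criterion:forward(output, target)

-- backward also takes the number target; gradients flow back through the model
local gradOutput = criterion:backward(output, target)
model:backward(input, gradOutput)
```

If output and targets[i] instead have mismatched shapes (e.g. a criterion that expects a target tensor of the same size as the input), that is exactly when the "inconsistent tensor size" error appears.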