inconsistent tensor size error
russellfei commented
When I tried to replace `inputs` and `targets` with `torch.Tensor` in the `train` module, luajit returned this error at `criterion:backward(output, targets[i])`.
I've checked that `output` is a tensor while `targets[i]` is a number, but the same code works just fine in `train-on-cifar`.
Even though SMW said in a torch7 Google Group discussion that repeatedly using the same instance of a nonlinearity can lead to this error, I'm still confused about the cause, and I don't think the nonlinearity unit is the cause here.
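For reference, here is a rough, self-contained sketch of the kind of loop I mean. It assumes the usual 2_supervised tutorial structure; the model, sizes, and class indices below are made up just so the snippet runs, not my exact code:

```lua
require 'nn'

-- hypothetical stand-ins for the tutorial's model/criterion/data
local model = nn.Sequential()
model:add(nn.Linear(5, 3))
model:add(nn.LogSoftMax())
local criterion = nn.ClassNLLCriterion()

local inputs  = {torch.randn(5), torch.randn(5)}
local targets = {1, 3}   -- plain class indices, as in my case

for i = 1, #inputs do
   local output = model:forward(inputs[i])
   local err    = criterion:forward(output, targets[i])
   -- this is the call where luajit reported "inconsistent tensor size"
   local df_do  = criterion:backward(output, targets[i])
   model:backward(inputs[i], df_do)
end
```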
Can anyone help me? thanks~
russellfei commented
I've checked the doc file and figured it out.
Here are some hints from `doc/criterion.md` for anyone who comes across the same question:
- The torch7 lib can use CUDA to run your code on the GPU, but the assumption is that all your `:cuda()` calls have corresponding `.cu` files or functions in some scripts.
- I've not found that `criterion` has any relevant CUDA functions, so there might be a misdirection in `torch-tutorials/2_supervised/3_loss.lua`: `criterion:cuda()`.
- `criterion:backward(input, target)` expects `target` as a number (`1` to `#class`), so feel free to use it!
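To illustrate that last point, a minimal sketch using `nn.ClassNLLCriterion` as an example criterion (the class count and values below are made up):

```lua
require 'nn'

local nClasses  = 10
local criterion = nn.ClassNLLCriterion()

-- fake log-probabilities for one sample, e.g. the output of a LogSoftMax layer
local output = nn.LogSoftMax():forward(torch.randn(nClasses))

-- target is just a plain Lua number: a class index from 1 to nClasses
local target = 3

local loss      = criterion:forward(output, target)
local gradInput = criterion:backward(output, target)   -- same size as output
print(loss, gradInput:size())
```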