BloodAxe/pytorch-toolbelt

SoftCrossEntropyLoss error

somebodyus opened this issue · 2 comments

When I use SoftCrossEntropyLoss, I get the following error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Could anyone help me? BTW, what paper proposed the SoftCrossEntropyLoss?

Hi! Thanks for raising this issue. Indeed, the built-in PyTorch function used some in-place operations, which caused this error. It should be fixed on master as of 68945c9.
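For context, here is a minimal standalone sketch (not from the library itself) of how an in-place operation on a tensor that autograd saved for the backward pass triggers exactly this RuntimeError:

```python
import torch

x = torch.ones(3, requires_grad=True)
y = x * 2
z = y.pow(2)   # pow saves its input y for the backward pass
y.add_(1)      # in-place edit bumps y's version counter

try:
    z.sum().backward()
except RuntimeError as e:
    # "one of the variables needed for gradient computation
    #  has been modified by an inplace operation"
    print(e)
```

Replacing the in-place call with an out-of-place one (`y = y + 1`) avoids the error, which is the kind of change the fix applies.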

Thank you for your update.
What does "soft" mean here? Could you give a technical reference for this loss?
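For reference, "soft" here likely refers to label smoothing (Szegedy et al., "Rethinking the Inception Architecture for Computer Vision"): instead of a one-hot target, the loss is computed against a smoothed ("soft") target distribution. A minimal sketch, assuming the true class gets `1 - smooth_factor` and the remaining mass is shared by the other classes (the library's exact smoothing scheme may differ):

```python
import math

def log_softmax(logits):
    # Numerically stable log-softmax over a list of raw scores.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(v - m) for v in logits))
    return [v - log_sum for v in logits]

def soft_cross_entropy(logits, target, smooth_factor=0.1):
    # Build the soft target: 1 - smooth_factor on the true class,
    # smooth_factor spread uniformly over the other classes.
    n = len(logits)
    soft = [smooth_factor / (n - 1)] * n
    soft[target] = 1.0 - smooth_factor
    logp = log_softmax(logits)
    return -sum(q * lp for q, lp in zip(soft, logp))

# With smooth_factor=0 this reduces to ordinary cross-entropy.
print(soft_cross_entropy([2.0, 0.5, 0.1], target=0, smooth_factor=0.1))
```

Smoothing penalizes over-confident predictions and acts as a regularizer, which is the usual motivation for a "soft" cross-entropy variant.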