BloodAxe/pytorch-toolbelt

Dice loss is smaller when computed on entire batch

Opened this issue · 0 comments

๐Ÿ› Bug

I noticed that when I compute the dice loss on an entire batch, the loss is smaller than when I compute it individually for each sample and then average the results. Is this behavior intended?
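The discrepancy can be reproduced with a minimal NumPy sketch of soft dice (this is an illustrative implementation, not the actual segmentation_models_pytorch code): because dice is a ratio of sums, summing over the whole batch before dividing is not the same as averaging per-sample ratios.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft dice loss computed over all elements of the given arrays."""
    intersection = (pred * target).sum()
    cardinality = pred.sum() + target.sum()
    return 1.0 - 2.0 * intersection / (cardinality + eps)

# Two toy samples: one perfect prediction, one completely wrong.
pred   = np.array([[1., 1., 1., 1.],
                   [1., 1., 1., 1.]])
target = np.array([[1., 1., 1., 1.],
                   [0., 0., 0., 0.]])

# One dice computed over the whole batch at once.
batch_loss = dice_loss(pred, target)
# Dice computed per sample, then averaged.
mean_loss = np.mean([dice_loss(p, t) for p, t in zip(pred, target)])

print(round(float(batch_loss), 4))  # 0.3333
print(round(float(mean_loss), 4))   # 0.5
```

Here the batch-wise loss (1/3) is smaller than the per-sample average (1/2): the well-predicted sample's large intersection dominates the pooled ratio and dilutes the badly-predicted one.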

Expected behavior

Dice loss computed on a batch should be equivalent to the average of the per-sample dice losses.

Environment

Using the loss from segmentation_models_pytorch.