Sparse annotations for training
Is there currently a way to use sparse annotations for training? For pixel classification this would mean ignoring all pixels set to 0 in the target image during loss computation and only considering pixels with a value > 0. If that is not possible, what would be the minimal modification to the code to achieve this, at least for some (or ideally all) of the existing losses?
Hello @SebastienTs, yes, this is possible with the regular `torch.nn.CrossEntropyLoss` by setting its `ignore_index` parameter to the label ID that you want to ignore. Since 0 is often reserved for a background class, I usually use other ID values. It just has to be the ID that is used in the label files; no changes to the model outputs are needed.
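To illustrate, here is a minimal sketch of the `ignore_index` mechanism. The label ID 255 and the tensor shapes are hypothetical choices for the example; any ID that appears in your label files works:

```python
import torch
import torch.nn as nn

# Hypothetical setup: 2D segmentation with 3 classes; label ID 255 marks
# unannotated pixels that should not contribute to the loss.
IGNORE_ID = 255

criterion = nn.CrossEntropyLoss(ignore_index=IGNORE_ID)

logits = torch.randn(1, 3, 4, 4)         # (N, C, H, W) raw model output
target = torch.randint(0, 3, (1, 4, 4))  # dense labels in {0, 1, 2}
target[0, :2, :] = IGNORE_ID             # mark the top half as unannotated

loss = criterion(logits, target)  # averaged over non-ignored pixels only
```

With the default mean reduction, the loss is averaged over the non-ignored pixels, so the ignored ones contribute neither to the value nor to the gradients.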
The `DiceLoss` in elektronn3 doesn't support an `ignore_index` option, but you can use its `weight` parameter to set the channel weight of the class that is to be ignored to 0.
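As a sketch of that mechanism, the function below implements a generic per-class-weighted soft Dice loss (not elektronn3's exact `DiceLoss` implementation); the shapes and class count are hypothetical. Setting a class's weight to 0 removes its channel from the loss entirely:

```python
import torch

def weighted_dice_loss(probs, target_onehot, weight, eps=1e-6):
    # Soft Dice loss with per-class weights. A class whose weight is 0
    # contributes nothing to the loss or the gradients. This is a minimal
    # illustration, not elektronn3's actual DiceLoss code.
    dims = (0, 2, 3)  # sum over batch and spatial dimensions
    intersection = (probs * target_onehot).sum(dims)
    cardinality = (probs + target_onehot).sum(dims)
    dice_per_class = 2.0 * intersection / (cardinality + eps)
    return ((1.0 - dice_per_class) * weight).sum() / weight.sum()

probs = torch.softmax(torch.randn(1, 3, 4, 4), dim=1)  # (N, C, H, W)
target = torch.nn.functional.one_hot(
    torch.randint(0, 3, (1, 4, 4)), num_classes=3
).permute(0, 3, 1, 2).float()

weight = torch.tensor([0.0, 1.0, 1.0])  # weight 0 -> class 0 is ignored
loss = weighted_dice_loss(probs, target, weight)
```

Because the weighted sum multiplies each class's Dice term by its weight before averaging, the ignored channel has no effect on the result.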