ildoonet/pytorch-randaugment

SmoothCrossEntropyLoss

jaemjaem opened this issue · 0 comments


1. class SmoothCrossEntropyLoss(Module):
2.     def __init__(self, label_smoothing=0.0, size_average=True):
3.         super().__init__()
4.         self.label_smoothing = label_smoothing
5.         self.size_average = size_average
6. 
7.     def forward(self, input, target):
8.         if len(target.size()) == 1:
9.             target = torch.nn.functional.one_hot(target, num_classes=input.size(-1))
10.             target = target.float().cuda()
11.         if self.label_smoothing > 0.0:
12.             s_by_c = self.label_smoothing / len(input[0])
13.             smooth = torch.zeros_like(target)
14.             smooth = smooth + s_by_c
15.             target = target * (1. - s_by_c) + smooth
16. 
17.         return cross_entropy(input, target, self.size_average)

It seems that label smoothing is not implemented here the way I understand it.

With 7 classes, printing `target` after line 15 gives:

[1.0000, 0.0143, 0.0143, 0.0143, 0.0143, 0.0143, 0.0143]

The label smoothing formula is:

y_ls = y_k * (1 - a) + a / K

but line 15 effectively computes:

y_ls = y_k * (1 - a / K) + a / K
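The difference is easy to verify numerically. A small sketch, assuming K = 7 classes and label_smoothing a = 0.1 (the value that reproduces the 0.0143 entries in the printout above):

```python
K = 7          # number of classes (assumption)
a = 0.1        # label_smoothing (assumption, implied by 0.1 / 7 ~= 0.0143)
s_by_c = a / K

y_hot, y_cold = 1.0, 0.0  # one-hot target entries

# Repo's line 15: y * (1 - a/K) + a/K -- on the hot entry the smoothing cancels out
buggy_hot = y_hot * (1.0 - s_by_c) + s_by_c    # -> 1.0000
buggy_cold = y_cold * (1.0 - s_by_c) + s_by_c  # -> 0.0143

# Standard formula: y * (1 - a) + a/K
fixed_hot = y_hot * (1.0 - a) + s_by_c         # -> 0.9143
fixed_cold = y_cold * (1.0 - a) + s_by_c       # -> 0.0143

print(round(buggy_hot, 4), round(fixed_hot, 4))
```

Note also that the buggy targets no longer sum to 1 (1.0 + 6 * 0.0143 ≈ 1.0857), while the corrected ones do, so the buggy version is not a valid probability distribution.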

The corrected line 15 and its result:

15. target = target * (1. - self.label_smoothing) + smooth

[0.9143, 0.0143, 0.0143, 0.0143, 0.0143, 0.0143, 0.0143]
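For reference, the fix could be wrapped into a self-contained module like this. This is a sketch, not the repo's code: the class name `FixedSmoothCrossEntropyLoss` is hypothetical, the custom `cross_entropy` helper is replaced with an explicit log-softmax so the snippet runs on its own, and the hard-coded `.cuda()` is dropped so it also works on CPU:

```python
import torch
from torch.nn import Module


class FixedSmoothCrossEntropyLoss(Module):
    """Hypothetical corrected version: smooths with (1 - a), not (1 - a/K)."""

    def __init__(self, label_smoothing=0.0):
        super().__init__()
        self.label_smoothing = label_smoothing

    def forward(self, input, target):
        # Convert class indices to one-hot soft targets if needed
        if target.dim() == 1:
            target = torch.nn.functional.one_hot(
                target, num_classes=input.size(-1)
            ).float()
        if self.label_smoothing > 0.0:
            num_classes = input.size(-1)
            # Standard formula: y_ls = y_k * (1 - a) + a / K
            target = (
                target * (1.0 - self.label_smoothing)
                + self.label_smoothing / num_classes
            )
        # Soft-target cross entropy: -sum(target * log_softmax(input)), averaged
        log_probs = torch.nn.functional.log_softmax(input, dim=-1)
        return -(target * log_probs).sum(dim=-1).mean()
```

With uniform logits over 7 classes, the smoothed targets still sum to 1, so the loss comes out to log(7) regardless of the smoothing amount, which is a quick sanity check that the formula is the standard one.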

Or have I misunderstood something?