eric-yyjau/pytorch-superpoint

Binary cross entropy loss

Closed this issue · 4 comments

Hi,

Thanks for this great repo.
I have a question about the detector loss. I noticed that you used binary cross entropy loss while dividing the labels of the interest points by the norm of each bin. In the paper and in the TensorFlow implementation, they used cross entropy loss while randomly choosing one of the interest points in each bin.
Can you explain why you chose this approach?
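
For reference, the paper/TensorFlow scheme as I understand it looks roughly like this (my own sketch; `to_hard_bins` is a made-up helper, not code from either repo):

    import torch
    import torch.nn.functional as F

    def to_hard_bins(labels_2d, cell=8):
        # labels_2d: (B, 1, H, W) binary keypoint map (float).
        # Returns (B, H/cell, W/cell) class indices in [0, 64], where
        # index 64 is the dustbin ("no keypoint in this cell").
        bins = F.pixel_unshuffle(labels_2d, cell)         # (B, 64, Hc, Wc)
        dustbin = torch.ones_like(bins[:, :1])            # beaten by any keypoint
        stacked = torch.cat([2 * bins, dustbin], dim=1)   # keypoints score 2, dustbin 1
        noise = torch.rand_like(stacked) * 0.1            # random tie-breaking among keypoints
        return (stacked + noise).argmax(dim=1)            # one random keypoint per cell

These hard class indices would then feed a standard multi-class cross entropy (e.g. F.cross_entropy), which is the loss I'm contrasting with the BCE used here.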

Hi @talshef,

Sorry for the late reply. Could you elaborate a bit more on 'dividing the labels of the interest points'?
Or possibly point out the lines you're referring to. Thanks!

In the function labels2Dto3D(labels, cell_size, add_dustbin=True):

        ## norm
        dn = labels.sum(dim=1)                       # total label mass in each bin
        labels = labels.div(torch.unsqueeze(dn, 1))  # normalize so each bin sums to 1
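
On a toy tensor, this division spreads the probability mass over all keypoints in a bin (my own toy example, toy sizes):

    import torch

    # One 3-channel "bin" at a single spatial location, containing two keypoints.
    labels = torch.tensor([[[1.], [1.], [0.]]])      # (B=1, C=3, 1)
    dn = labels.sum(dim=1)                           # 2 keypoints in the bin
    labels = labels.div(torch.unsqueeze(dn, 1))
    print(labels.squeeze())                          # tensor([0.5000, 0.5000, 0.0000])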

Anyway, I saw that in the paper and in the TensorFlow version they use multi-class cross entropy, while you used binary cross entropy. Why did you use the binary version?

If you're referring to the keypoints, binary cross entropy should work fine. Once you understand what the output represents, it just makes sense.
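
A quick sanity check of that (the shapes and the softmax-then-BCE pattern are my assumptions, not necessarily what this repo does):

    import torch
    import torch.nn.functional as F

    logits = torch.randn(2, 65, 30, 40)                  # toy detector-head output
    labels = torch.rand(2, 65, 30, 40)
    labels = labels / labels.sum(dim=1, keepdim=True)    # soft targets, sum to 1 per bin

    probs = F.softmax(logits, dim=1)
    bce = F.binary_cross_entropy(probs, labels)          # channel-wise BCE
    ce = -(labels * probs.clamp_min(1e-8).log()).sum(dim=1).mean()  # soft multi-class CE

Both losses are minimized when the predicted distribution matches the soft labels, so BCE over the 65 channels plays the same role as the soft multi-class version.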

Hi @talshef,

For the dividing: it normalizes the labels into a probability distribution, so that the cross entropy can work.
The dustbin channel is assigned 1 if that 64-dimensional bin contains no keypoint.
Thanks.
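
In other words, the label construction roughly looks like this (a sketch with a hypothetical labels_with_dustbin helper, assuming cell_size=8; see labels2Dto3D in the repo for the actual code):

    import torch
    import torch.nn.functional as F

    def labels_with_dustbin(labels_2d, cell=8):
        # labels_2d: (B, 1, H, W) binary keypoint map (float).
        bins = F.pixel_unshuffle(labels_2d, cell)                # (B, 64, Hc, Wc)
        dustbin = (bins.sum(dim=1, keepdim=True) == 0).float()   # 1 iff the cell is empty
        labels = torch.cat([bins, dustbin], dim=1)               # (B, 65, Hc, Wc)
        dn = labels.sum(dim=1)                                   # >= 1 everywhere now
        return labels.div(torch.unsqueeze(dn, 1))                # each bin sums to 1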