Normalize the gt template
Opened this issue · 1 comment
HRHLALALA commented
Hi, thanks for your excellent work!
I noticed that your template values lie within (0, 0.0099) and that you do not normalize the template to (0, 1), even though there is such an option. Would this affect the sigmoid function and BCE? Usually we use binary labels for this. May I ask the reason behind that?
Just want to make sure I am not missing any important details! It would be helpful if you could answer this. Thanks!
ArcaneEmergence commented
The template reflects a probability distribution over positions, so everything should sum to 1. As you mentioned, BCE is usually used with binary labels, but it can also be used to measure one distribution against another, just as negative log-likelihood does.
https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html
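As a small illustration of the point above (a plain-Python sketch, not code from this repo): `torch.nn.BCELoss` accepts soft targets in [0, 1], and the per-position loss is minimized exactly when the prediction equals the soft target, so non-binary template values like 0.0099 are perfectly valid targets.

```python
import math

def bce(pred, target):
    # Binary cross-entropy for one position; target may be a soft
    # probability in [0, 1], not just 0 or 1 (same formula as BCELoss).
    eps = 1e-12  # small numerical guard (an assumption, not from the thread)
    return -(target * math.log(pred + eps)
             + (1 - target) * math.log(1 - pred + eps))

# A soft (non-binary) target, e.g. a template value such as 0.0099:
t = 0.0099
candidates = (0.001, 0.0099, 0.05, 0.5)
losses = {p: bce(p, t) for p in candidates}

# The loss is smallest when the prediction matches the soft target:
best = min(losses, key=losses.get)
print(best)  # → 0.0099
```

Setting the derivative of the loss to zero gives p = target, so BCE behaves as a proper distance between distributions here, not just a binary classifier loss.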