Strange appearance of the loss curves
chinmay5 opened this issue · 4 comments
I am able to replicate the results but found something strange. I made a small tweak and also plotted the loss curves for the training and validation sets. When I do, I get a "strange" tuning-fork shaped loss, even though the recall sum looks perfectly fine. Can you please tell me whether this is the expected behavior?
Your help would be highly appreciated.
That's an interesting curve. What kinds of tweaks did you make? Perhaps you could play with different learning rates?
That is the interesting part. I chose the same hyperparameters as mentioned in the paper and am using n_heads=1
in the calculation. The Recall sum and final results are in line with the numbers reported in the paper. It is just the loss that looks really weird. I find it hard to make sense of what is actually happening, and why the results match the paper when the loss values clearly suggest they should not...
I assume the results are from COCO. Did you also try the other two datasets and observe similar behavior? It's been a while since I worked on this project so my memory is thin, but in general, the triplet loss is only a surrogate for the ranking objective, so it is possible for the loss curve to look different from the rsum curve (though I agree the loss above looks quite interesting).
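To make the surrogate point concrete: here is a minimal numpy sketch (not the repo's actual implementation; it assumes a VSE-style hinge triplet loss over an image-caption similarity matrix with positives on the diagonal) showing that recall can be perfect while the loss is still well above zero, because negatives that rank below the positive can still sit inside the margin.

```python
import numpy as np

def triplet_margin_loss(scores, margin=0.2):
    """Sum-of-hinges triplet ranking loss over a score matrix.
    scores[i, j] = similarity of image i and caption j; the
    diagonal entries are the positive pairs (a common setup in
    image-text retrieval)."""
    pos = np.diag(scores)
    # image-to-caption direction: negatives in each row vs. its positive
    cost_c = np.maximum(0.0, margin + scores - pos[:, None])
    # caption-to-image direction: negatives in each column vs. its positive
    cost_i = np.maximum(0.0, margin + scores - pos[None, :])
    mask = np.eye(scores.shape[0], dtype=bool)
    cost_c[mask] = 0.0  # exclude the positive pair itself
    cost_i[mask] = 0.0
    return cost_c.sum() + cost_i.sum()

def recall_at_k(scores, k=1):
    """Fraction of rows whose positive (diagonal) ranks in the top k."""
    ranks = (scores > np.diag(scores)[:, None]).sum(axis=1)
    return float((ranks < k).mean())

# Every positive already ranks first (R@1 = 1.0), yet many negatives
# still violate the 0.2 margin, so the loss remains positive.
scores = np.array([[0.50, 0.45, 0.40],
                   [0.41, 0.52, 0.48],
                   [0.39, 0.44, 0.51]])
print(recall_at_k(scores, k=1))     # 1.0
print(triplet_margin_loss(scores))  # > 0 despite perfect recall
```

So the loss can keep moving (or plateau at a nonzero value) for reasons the rsum metric is blind to, which is consistent with seeing matching recall numbers next to an odd-looking loss curve.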
Closing due to inactivity.