Evaluation results change between runs
Sunjuhyeong opened this issue · 2 comments
Hello!
I'm a student researching Deep Metric Learning, and I have a short question about this repo. By the way, thank you for your interesting work!
Is it normal to get exactly the same evaluation metric values (Recall@k) every time the same code is run?
Does the np.random usage in distanceweightedsampling introduce any randomness?
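For context, here is roughly what I understand the sampler to be doing (a simplified sketch of distance-weighted sampling, not the repo's actual code; the function and parameter names are mine):

```python
import numpy as np

def sample_negative(anchor_idx, embeddings, labels, cutoff=0.5):
    """Draw one negative index, weighted inversely to the distance density q(d)."""
    dim = embeddings.shape[1]
    # Distances from the anchor to every sample (embeddings are L2-normalized).
    dists = np.linalg.norm(embeddings - embeddings[anchor_idx], axis=1)
    dists = np.clip(dists, cutoff, 2.0 - 1e-4)
    # log q(d) for points on the unit hypersphere: q(d) ~ d^(n-2) * (1 - d^2/4)^((n-3)/2)
    log_q = (dim - 2.0) * np.log(dists) \
          + ((dim - 3.0) / 2.0) * np.log(1.0 - 0.25 * dists ** 2)
    # Weights proportional to 1/q(d), max-scaled for numerical stability.
    weights = np.exp(log_q.min() - log_q)
    weights[labels == labels[anchor_idx]] = 0.0  # only true negatives are eligible
    weights /= weights.sum()
    # This np.random call is the run-to-run randomness I'm asking about.
    return np.random.choice(len(embeddings), p=weights)

# Tiny demo:
emb = np.random.randn(8, 32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(sample_negative(0, emb, labels))
```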
I thought my code change wouldn't interfere with training, but since the change the metric values vary every time I train. Specifically, Recall@1 fluctuates by roughly 0.01 to 0.015 between runs. I'm confused about whether my code change is actually affecting training.
Indeed, that should usually be the case. If you want to change the "randomness" during training, simply set --seed to a different value, as this value seeds all random processes used.
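For illustration, seeding "all random processes" in a PyTorch training script usually looks something like this (a minimal sketch; the exact code in this repo may differ):

```python
import random
import numpy as np
import torch

def set_seed(seed):
    random.seed(seed)                 # Python's built-in RNG
    np.random.seed(seed)              # NumPy, e.g. the distance-weighted sampler
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch RNG on all GPUs

# e.g. with the value parsed from --seed:
set_seed(0)
```

With the same seed, all of these RNGs produce the same draws each run, which is why the Recall@k values come out identical unless the code path (and thus the sequence of random calls) changes.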
Hope that helps!
I got it. Thanks for the answer!