bethgelab/siamese-mask-rcnn

Multiple runs on training not evaluation

MSiam opened this issue · 2 comments

MSiam commented

Hi, thanks for the great work!

I have a question regarding the reported results. In the paper you mention that you perform 5 runs for the evaluation. My question is: did you also test the randomness coming from the training procedure, since training is itself a stochastic process? You randomly sample the support set and the query image, so some variability is to be expected. I am wondering whether you reached stable results for the training itself, even with different random seeds.
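For example, something along the lines of the sketch below is what I have in mind (the training/evaluation call is just a placeholder, not your actual scripts):

```python
import random
import numpy as np

def set_all_seeds(seed):
    """Fix the sources of randomness I can think of; the deep learning
    framework's own seed would need to be set here as well."""
    random.seed(seed)
    np.random.seed(seed)

# Repeat the full training with different seeds and look at the spread
# of the evaluation metric.
aps = []
for seed in [0, 1, 2, 3, 4]:
    set_all_seeds(seed)
    # ap = train_and_evaluate()  # placeholder for one full training + eval run
    # aps.append(ap)

if aps:
    print("AP: %.3f +/- %.3f over %d runs" % (np.mean(aps), np.std(aps), len(aps)))
```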

Thanks

Hi @MSiam, we did not explicitly test this. In my experience the differences are small, but I never tried to quantify them. In general, the stochasticity during training is smoothed by the fact that we train for 160k steps with a random reference at each step. The remaining component is the randomness of the initialization, which is the same for all networks and has small effects for standard Mask R-CNN models.
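To make concrete what I mean by a random reference at each step, here is a rough sketch of the per-step sampling (the dictionary layout is just for illustration; the actual data generator in this repo differs in its details):

```python
import numpy as np

def sample_training_example(dataset, rng):
    """Simplified sketch: pick a query image, pick one of its annotated
    categories, then draw a random reference (support) image containing
    that category from a different image."""
    query_idx = rng.randint(len(dataset["images"]))
    query = dataset["images"][query_idx]
    category = rng.choice(query["category_ids"])
    candidates = [i for i in dataset["category_to_images"][category]
                  if i != query_idx]
    reference = dataset["images"][candidates[rng.randint(len(candidates))]]
    return query, reference, category

# Each of the ~160k training steps draws a fresh reference this way, so the
# particular references seen at any single step average out over training.
```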

MSiam commented

Thanks for your reply