Simon4Yan/Learning-via-Translation

Direct Transfer results


Hi, I have some questions about the 'Direct Transfer' results in Table 2 of your paper. I made my settings consistent with yours, but I cannot reach such high baselines, e.g. rank-1 33.1%, mAP 16.7%, when training on Market1501 and testing on DukeMTMC-reID. I consistently get around rank-1 27%, mAP 13%. Even so, this is better than the result in https://arxiv.org/pdf/1705.10444.pdf, which reports rank-1 21.9%, mAP 10.9%.
I wonder whether any tricks were used in your experiments. I'm looking forward to your reply.

@liangbh6 @Simon4Yan Which framework did you use, Caffe or PyTorch?

@Simon4john PyTorch. So the reason is the difference between PyTorch and Caffe? If I want to reproduce your results in PyTorch, do you have any suggestions about the learning rate, data augmentation, or testing tricks like feature normalization? I have actually tried normalizing the features, but it helped only a little. A minimal sketch of what I mean by normalization is below.
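For reference, here is a minimal sketch of the normalization I tried (tensor names and sizes are illustrative, not actual repo code):

```python
import torch
import torch.nn.functional as F

# Illustrative placeholders: (N, D) embedding matrices extracted by the
# re-ID model for the query and gallery sets.
query_feats = torch.randn(10, 2048)
gallery_feats = torch.randn(100, 2048)

# L2-normalize each feature vector before computing distances.
query_feats = F.normalize(query_feats, p=2, dim=1)
gallery_feats = F.normalize(gallery_feats, p=2, dim=1)

# With unit-length features, Euclidean-distance ranking is equivalent to
# cosine-similarity ranking.
dist = torch.cdist(query_feats, gallery_feats)  # (10, 100) pairwise distances
```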

@liangbh6 @Simon4john Thanks for your attention. The code for re-ID feature learning is mainly modified from IDE, and the framework is Caffe.

Thanks for your question. We conducted an experiment to examine the difference between PyTorch and Caffe, and we found that BN causes this performance gap. I will give the experiment details after I get back to school.

With the help of Houjing Huang (his homepage is here), I found that the performance gap between PyTorch and Caffe is caused by BN.

Here are Huang's experiments:

The key point is whether you set the BN layers to train or eval mode during training. Eval mode for the BN layers during training, corresponding to Caffe's `batch_norm_param { use_global_stats: true }`, means using the ImageNet BN mean and variance throughout training.
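In PyTorch this can be done roughly as follows (a minimal sketch assuming an ImageNet-pretrained torchvision backbone, not the exact training code from the repo):

```python
import torch.nn as nn
from torchvision import models

# Keep BN layers in eval mode during training, mimicking Caffe's
# batch_norm_param { use_global_stats: true }. Since the running stats of
# an ImageNet-pretrained backbone are then never updated, BN keeps
# normalizing with the ImageNet mean and variance.
def set_bn_eval(m):
    if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
        m.eval()  # normalize with stored running stats and stop updating them

model = models.resnet50(pretrained=True)  # ImageNet-pretrained backbone
model.train()             # put the whole model in train mode first...
model.apply(set_bn_eval)  # ...then switch the BN layers back to eval mode
# Re-apply set_bn_eval after every call to model.train(), e.g. at the
# start of each epoch.
```

Note that the BN affine parameters (weight and bias) still receive gradients in eval mode, which matches Caffe's trainable Scale layer on top of the frozen statistics.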

We trained the models in PyTorch, with settings identical to the Caffe ones.

(1) When setting the BN layers to train mode during training and eval mode during testing, the scores are as follows:

  • Market1501->Market1501 [mAP: 58.13%], [cmc1: 78.95%]
  • Market1501->Duke [mAP: 11.55%], [cmc1: 21.99%]

(2) When setting the BN layers to eval mode during training and eval mode during testing, the scores are as follows:

  • Market1501->Market1501 [mAP: 52.38%], [cmc1: 76.31%]
  • Market1501->Duke [mAP: 16.68%], [cmc1: 31.82%]

Therefore, we believe that BN is the key factor in the performance gap between Caffe and PyTorch.

@Simon4Yan Excellent work! Thanks a lot.