LeapLabTHU/DAT

Mismatch in classification results on ImageNet

Mollylulu opened this issue · 3 comments

Thanks for the contribution.
I trained the 224 x 224 ImageNet classification model, but there is an accuracy gap between my result and yours. Could you release the pretrained model and the related settings?
Thanks.


Hello, could you share the result of your trained model? How large is the gap compared with the paper? Thank you.

@Mollylulu Thanks for your interest; the pretrained weights have been committed. When I tried training the models with a smaller batch size, I observed a slight performance drop (81.8 vs. 82.0) on a single node (8x V100 or 3090 GPUs) with a batch size of 1024 or less. In fact, the results in the paper were trained with a batch size of 4096 on 32 A100 GPUs for faster iteration of the experiments. Could you report your training setup?
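One common knob to check when reproducing a result at a different batch size is the learning rate. The snippet below is a minimal sketch of the linear scaling rule (scale the LR proportionally to the effective batch size); the base values and the function itself are illustrative assumptions, not DAT's actual training config.

```python
# Hypothetical helper illustrating the linear LR scaling rule; the base
# learning rate and reference batch size are assumptions for illustration,
# not values taken from the DAT repository.
def scale_lr(base_lr: float, batch_size: int, base_batch_size: int = 512) -> float:
    """Scale the learning rate linearly with the effective (global) batch size."""
    return base_lr * batch_size / base_batch_size

# Example: with an assumed base LR of 5e-4 at batch size 512,
# a 1024 global batch would use 1e-3, and a 4096 batch would use 4e-3.
lr_1024 = scale_lr(5e-4, 1024)  # 1e-3
lr_4096 = scale_lr(5e-4, 4096)  # 4e-3
```

If the repository's config already scales the LR automatically, this is handled for you; otherwise, keeping the LR fixed while shrinking the batch from 4096 to 1024 could account for part of the gap.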

It turned out to be the batch size and GPU version. Sorry for the late reply.