MCG-NJU/VideoMAE

MoCoV3 Training Configuration

fmthoker opened this issue · 0 comments

Hi,
Thanks for releasing the code and for this amazing work. I am using the MAE and MoCo-V3 baselines in my current work; however, I cannot reproduce your Table 2 result for MoCo-V3 pre-training on UCF101 followed by fine-tuning on UCF101 (81.7%). The paper gives no implementation or configuration details for this setting, so would it be possible to share how you pre-trained MoCo-V3? On my side, I am strictly following the image-based MoCo-V3 recipe.
Details such as the batch size, learning rate, learning-rate schedule, and number of GPUs would be very helpful.
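
For reference, this is a minimal sketch of the image-based recipe I am currently adapting, with hyperparameters taken from the MoCo-V3 paper; the video-specific values you used for the Table 2 baseline are exactly what I am asking about:

```python
# Minimal sketch of the image-based MoCo-V3 (ViT) pre-training recipe I follow.
# Values are from the MoCo-V3 paper; any video-specific changes made for the
# VideoMAE Table 2 baseline are unknown to me and are what this issue asks about.
moco_v3_image_recipe = dict(
    backbone="vit_base_patch16",
    optimizer="AdamW",
    base_lr=1.5e-4,          # effective lr = base_lr * batch_size / 256
    weight_decay=0.1,
    batch_size=4096,
    epochs=300,
    warmup_epochs=40,
    lr_schedule="cosine",
    ema_momentum=0.99,       # momentum encoder, annealed to 1.0 with a cosine schedule
    temperature=0.2,         # InfoNCE temperature
)
```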
Thanks in advance, and I look forward to your reply.