facebookresearch/co-tracker

About reproducing the training results.

wwsource opened this issue · 1 comment

Hi, excellent work.
I only have eight 24GB 3090 GPUs. I am using the CoTracker1 training code and trying to reproduce the performance of the 'stride 8, window 16' setting. I replaced the original 32-GPU, 50K-iteration schedule with 8 GPUs and 200K iterations, but on DAVIS First I only reach an OA below 82 and a delta_avg below 65. What could be the reason? How can I reproduce the 'stride 8, window 16' (or even 'stride 4, window 8') results on eight 24GB 3090 GPUs? Many thanks!

Hi @wwsource, we haven't tried to train CoTracker2 on 8 GPUs.

Could you try training the model on 8 GPUs for 50k or 100k iterations? You might also need to adjust the learning rate (see the sketch below).
Alternatively, training the old version on 8 GPUs should work: https://github.com/facebookresearch/co-tracker/tree/8d364031971f6b3efec945dd15c468a183e58212
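For the learning-rate adjustment, a common heuristic when reducing the number of GPUs (and therefore the global batch size) is the linear scaling rule: scale the learning rate by the same factor as the batch size. A minimal sketch, assuming the per-GPU batch size is unchanged when going from 32 to 8 GPUs; the baseline LR below is illustrative, not the repo's actual default:

```python
def scale_lr(base_lr: float, base_gpus: int, new_gpus: int) -> float:
    """Scale the learning rate proportionally to the number of GPUs
    (i.e. to the global batch size), per the linear scaling rule."""
    return base_lr * new_gpus / base_gpus


if __name__ == "__main__":
    base_lr = 5e-4  # hypothetical baseline LR for the 32-GPU setup
    new_lr = scale_lr(base_lr, base_gpus=32, new_gpus=8)
    print(f"suggested LR for 8 GPUs: {new_lr:.2e}")  # 1.25e-04 with these numbers
```

Whether linear scaling or a milder square-root scaling works better here would need to be verified empirically, e.g. by comparing DAVIS First metrics after a short 50k-iteration run.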