fcjian/PromptDet

Baseline training configs

hanoonaR opened this issue · 1 comment

Hi,

Thank you for sharing your work. I would like to know the training configuration used for the baseline reported in Table 2 of your paper. The implementation details in the paper specify a 1x schedule with a learning rate of 0.02, but the shared configuration sets samples_per_gpu to 4.

In contrast, the default mmdet training config for Mask R-CNN with FPN on a 1x schedule uses 8 GPUs with 2 samples per GPU, i.e. an effective batch size of 16, and a learning rate of 0.02.

Could you please specify the number of GPUs, the batch size, and the corresponding learning rate used in your baseline?

Thank you.

Thank you for your question.

The baseline reported in Table 2 is trained with a total batch size of 16 (4 samples per GPU × 4 GPUs) and a learning rate of 0.02.
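
For reference, a minimal sketch of the corresponding mmdet-style config fields under that setup (this assumes standard mmdet 2.x config keys such as `data` and `optimizer`; the `workers_per_gpu`, momentum, and weight decay values below are placeholders, not confirmed values from this repo):

```python
# Hypothetical mmdet 2.x config snippet matching the reply above:
# 4 samples per GPU x 4 GPUs = effective batch size 16, lr 0.02.
data = dict(
    samples_per_gpu=4,   # per-GPU batch size
    workers_per_gpu=2,   # assumed dataloader workers; not specified in the thread
)
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
```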