The AP of batch size 16 on one GPU
Opened this issue · 1 comment
ray-peng commented
Excuse me, I have retrained the model on one GPU with batch size 16, but I got a lower AP than reported in the paper. I have read the issue similar to my problem (#5), but I don't know how to adjust the learning rate. Can you give me some advice?
SPengLiang commented
Sorry for the late reply. You can increase the learning rate and the number of training epochs, and the learning rate decay schedule should be adjusted correspondingly.
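A minimal sketch of what "adjust correspondingly" could mean, assuming the common linear scaling rule (learning rate scaled in proportion to the effective batch size, decay milestones stretched with the added epochs). The function name, baseline batch size, and all numbers below are illustrative assumptions, not values from this repository's training config.

```python
# Hypothetical helper: rescale the learning rate and decay milestones when
# the batch size and epoch budget change. All baseline values are assumed
# for illustration; substitute the repository's actual config.

def scale_schedule(base_lr, base_epochs, base_milestones,
                   base_batch, new_batch, extra_epochs=0):
    """Return (lr, total_epochs, milestones) adjusted for a new batch size."""
    factor = new_batch / base_batch          # linear scaling rule
    lr = base_lr * factor
    total_epochs = base_epochs + extra_epochs
    stretch = total_epochs / base_epochs     # keep decay points proportional
    milestones = [round(m * stretch) for m in base_milestones]
    return lr, total_epochs, milestones

# Example: if the baseline used batch size 8 and you retrain with batch
# size 16, the rule doubles the learning rate; adding 50 epochs shifts
# the decay steps proportionally.
lr, epochs, milestones = scale_schedule(
    base_lr=1e-3, base_epochs=100, base_milestones=[60, 80],
    base_batch=8, new_batch=16, extra_epochs=50)
print(lr, epochs, milestones)  # → 0.002 150 [90, 120]
```

In PyTorch the resulting `milestones` would typically feed a `torch.optim.lr_scheduler.MultiStepLR`; the key point is to move the decay steps along with the longer schedule rather than keeping them at their original epochs.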