Concern about the performance
Opened this issue · 4 comments
yarkable commented
Hi, thanks for your great work. Following your supplementary materials, I used RetinaNet-ResNet101 to distill RetinaNet-ResNet50, and I plotted the mAP curves as follows. It can be seen that the KD method performs better than the student, but it rises slowly later in training, with nearly no gain over the student model by the end. I wonder if this is a common phenomenon, or whether something is wrong?
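For context, the distillation setup being discussed can be sketched as a soft-label KD loss between teacher and student classification logits. This is a minimal NumPy sketch of the standard temperature-scaled KD formulation (Hinton et al.), not the paper's exact loss; the temperature value and the per-anchor averaging are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label distillation loss: KL(teacher || student) on
    temperature-softened class distributions, scaled by T^2.
    T=2.0 is an assumed illustrative temperature."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()
```

When the student matches the teacher exactly, the loss is zero; any mismatch in the softened distributions gives a positive penalty, which is the signal that is expected to shrink as training progresses.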
ArchipLab-LinfengZhang commented
Hi yarkable,
Maybe you can try training the detectors for more epochs. Usually, I find that KD performs much better with 2x training (24 epochs).
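The 2x schedule mentioned above usually means 24 epochs with step LR decay. This sketch assumes the common detection-framework convention of decaying by 10x at epochs 16 and 22; those milestones and the base LR are illustrative assumptions, not values confirmed in this thread.

```python
def lr_at_epoch(epoch, base_lr=0.01, milestones=(16, 22), gamma=0.1):
    """Step LR typical of a 2x (24-epoch) detection schedule.
    milestones=(16, 22) and base_lr=0.01 are assumed defaults."""
    factor = 1.0
    for m in milestones:
        if epoch >= m:
            factor *= gamma
    return base_lr * factor
```

Compared to the 1x schedule (12 epochs, decay at 8 and 11), the longer plateau before the first decay gives the distillation loss more time to transfer teacher knowledge before the learning rate drops.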
yarkable commented
Yep, I went back to your paper, and it seems that you use the 2x schedule in all your experiments. Is that a common practice for distillation in object detection?