ArchipLab-LinfengZhang/Object-Detection-Knowledge-Distillation-ICLR2021

Concern about the performance


Hi, thanks for your great work. Following your supplementary materials, I used RetinaNet-ResNet101 as the teacher to distill RetinaNet-ResNet50, and I plotted the mAP curves below. The KD model's performance is higher than the student's, but it improves slowly later in training and ends up with almost no gain over the student baseline. Is this a common phenomenon, or is something wrong with my setup?

[image: mAP curves of the distilled model vs. the student baseline]
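For readers unfamiliar with the setup, the sketch below shows a generic feature-imitation distillation loss between a teacher and a student detector. It is only an illustration of the general idea, not the exact loss used in this repo; the function name, the choice of L2 imitation, and the weight are assumptions.

```python
import torch
import torch.nn.functional as F

def feature_kd_loss(student_feats, teacher_feats, weight=1.0):
    """Generic L2 feature-imitation loss over matched feature levels.

    student_feats / teacher_feats: lists of tensors [N, C, H, W] with the
    same shape at every level (e.g. FPN outputs of a RetinaNet-R50 student
    and a RetinaNet-R101 teacher).
    """
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        # Detach the teacher so gradients only flow into the student.
        loss = loss + F.mse_loss(s, t.detach())
    return weight * loss

# Illustrative use inside a training step (names are hypothetical):
# total_loss = detection_loss + feature_kd_loss(student_fpn, teacher_fpn, weight=0.5)
```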

Hi @yarkable,
Maybe you can try training the detectors for more epochs. In my experience, KD performs much better with the 2x training schedule (24 epochs).
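As a reference point, here is a minimal sketch of what a "2x" (24-epoch) schedule looks like in an mmdetection-style config. This repo may use a different framework or different hyperparameters; the learning rate, warmup, and decay epochs below follow the common 2x convention rather than the authors' exact settings.

```python
# mmdetection-style schedule config (illustrative, not the repo's exact file)
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.001,
    step=[16, 22])  # LR decays at epochs 16 and 22
runner = dict(type='EpochBasedRunner', max_epochs=24)  # 2x schedule = 24 epochs
```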

Yep, I went back to your paper, and it seems you use the 2x schedule in all your experiments. Is that common practice for distillation in object detection?

lji72 commented

@yarkable Did you get the expected gain with the 2x training schedule?

@lji72 Haven't tried yet.