DingXiaoH/RepLKNet-pytorch

Pre-trained weights of RepLKNet-13 for visualizing the ERF

ShunLu91 opened this issue · 5 comments

Thanks for your nice work! Could you please upload the pre-trained weights of RepLKNet-13 so that we can visualize the ERF?

Best wishes and great thanks!

Done. Updated the README:

To reproduce the results in the paper, please download the RepLKNet-13 (Google Drive, Baidu) and RepLKNet-31 (Google Drive, Baidu) models trained for 120 epochs.
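For loading a downloaded checkpoint, something like the following should work (a minimal sketch: the factory name follows `replknet.py` in this repository, while the filename and checkpoint layout here are assumptions — adjust them to the file you actually download):

```python
import torch
from replknet import create_RepLKNet31B  # factory from this repo; use the matching one for RepLKNet-13

model = create_RepLKNet31B(num_classes=1000)
ckpt = torch.load('RepLKNet-31B_120epochs.pth', map_location='cpu')  # hypothetical filename
state_dict = ckpt.get('model', ckpt)  # some checkpoints nest the weights under a 'model' key
model.load_state_dict(state_dict)
model.eval()
```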

Thank you very much for your timely reply.

Based on the code and pre-trained weights you provided, I visualized the ERF of ResNet-101, ResNet-152, and RepLKNet-31.
When I loaded the pre-trained weights, the ERF was similar to the figure in your paper.
However, when the models were visualized without pre-trained weights, a distinct phenomenon appeared: ResNet-101 and ResNet-152 showed much larger ERFs than RepLKNet-31.
Given these observations, I wonder whether we can conclude that these networks naturally have large receptive fields, and that their final, distinct ERFs are a product of training rather than of the architecture alone.
[Image: ERF visualizations of ResNet-101, ResNet-152, and RepLKNet-31, with and without pre-trained weights]
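For reference, the measurement I ran is essentially the following (a minimal sketch: it assumes the callable returns the final `(N, C, H, W)` feature map, e.g. via a `forward_features`-style method, and it aggregates a single batch rather than the full validation set):

```python
import torch

def contribution_matrix(model, images):
    # Gradient of the central point of the final feature map w.r.t. the input pixels,
    # aggregated over the batch -- the "contribution matrix" used for ERF plots.
    images = images.detach().clone().requires_grad_(True)
    feat = model(images)                       # assumed to return the (N, C, H, W) feature map
    h, w = feat.shape[2] // 2, feat.shape[3] // 2
    feat[:, :, h, w].sum().backward()          # central point, summed over batch and channels
    grad = images.grad.abs().sum(dim=(0, 1))   # aggregate over batch and RGB channels -> (H, W)
    return torch.log1p(grad)                   # log scale makes the ERF visible in a heatmap
```

In practice the matrix is accumulated over many high-resolution validation images, but a single batch already shows the qualitative difference between architectures.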

Best regards,
Shun Lu

Thank you for sharing the results and your thoughts. We also found that the ERF of RepLKNet expands as training proceeds. Although the ERF visualized at the very beginning indeed differs from that of a well-trained model, it is worth asking whether the "receptive field" of an untrained network is meaningful at all: since the model cannot make reasonable predictions before training, the contribution matrix is arguably not a sound measure of pixel contributions.
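To see this effect concretely, one could run the same measurement before and after loading weights (a hypothetical comparison reusing the `contribution_matrix` sketch above; it assumes a `forward_features`-style method that returns the spatial feature map, and real validation images should be used in practice):

```python
images = torch.randn(4, 3, 512, 512)  # placeholder inputs; use real validation images in practice

model = create_RepLKNet31B(num_classes=1000).eval()
erf_random = contribution_matrix(model.forward_features, images)   # ERF at random initialization

model.load_state_dict(state_dict)  # pretrained weights, loaded as in the earlier sketch
erf_trained = contribution_matrix(model.forward_features, images)  # ERF after 120-epoch training
```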

OK, thanks again for your response and for your great contribution to the community.
I will keep following your work.