Why do you freeze batch norm layers for fine-tuning?
Seyoung9304 opened this issue
Seyoung9304 commented
Hi, this is very nice work! Thanks for your contribution.
In your code, I found that you freeze every batch normalization layer when you fine-tune the model.
if args.stage != 'chairs':
    model.module.freeze_bn()
I wonder why you froze the BN layers when you fine-tune the model. Is there any theoretical or experimental reason for it?
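For reference, my rough understanding of what such a freeze_bn method usually does is sketched below. This is a minimal sketch of my own, not your actual implementation (the toy model class, its layers, and the comments are my assumptions), assuming freeze_bn simply switches every BatchNorm2d module to eval mode so the running statistics stop being updated during fine-tuning:

import torch.nn as nn

class ToyFlowNet(nn.Module):
    # Toy stand-in network; only the BN-freezing logic matters here.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )

    def freeze_bn(self):
        # Put every BatchNorm2d into eval mode: the running mean/variance are
        # no longer updated from the fine-tuning batches, so the statistics
        # estimated during pre-training are kept as-is.
        for m in self.modules():
            if isinstance(m, nn.BatchNorm2d):
                m.eval()

So after calling model.train(), a call to freeze_bn() would keep the BN statistics fixed while the rest of the network still trains, if my sketch matches the intent of your code.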
Thank you in advance :)