msminhas93/DeepLabv3FineTuning

Frozen layers

guptavasu1213 opened this issue · 3 comments

Did you freeze any layers in your implementation, or is the entire model retrained starting from the pretrained DeepLabv3 weights?

Yes, it appears that no layers have been frozen: `requires_grad` is never set to `False` for the initial layers, and by default a parameter's `requires_grad` is `True`. @msminhas93 could you please confirm?

Yes @rohansinghjain you are right, there are no frozen layers in this implementation.
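For anyone landing here who does want frozen layers, a minimal PyTorch sketch of the idea (the `backbone`/`classifier` attribute names match torchvision's DeepLabv3 models; the tiny `nn.Linear` stand-ins below are just for illustration):

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained model: an "early" feature extractor
# plus a task-specific head (illustrative only, not the repo's model).
model = nn.Sequential(
    nn.Linear(16, 32),  # plays the role of pretrained early layers
    nn.ReLU(),
    nn.Linear(32, 4),   # plays the role of the segmentation head
)

# Parameters are trainable by default.
default_trainable = all(p.requires_grad for p in model.parameters())

# Freezing = turning off gradient tracking for the early layers.
for p in model[0].parameters():
    p.requires_grad = False

frozen = not any(p.requires_grad for p in model[0].parameters())
```

With torchvision's DeepLabv3 the equivalent loop would iterate over `model.backbone.parameters()` instead of `model[0].parameters()`.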

Hi @msminhas93, I don't understand. If the tutorial says it starts from the original pretrained DeepLabv3 weights, why are no layers frozen? Shouldn't you freeze the initial layers of the network and train the rest slowly (e.g. with a low learning rate) on the new task?

Thanks for your work and your time in answering my question!