DuaneNielsen/DeepInfomaxPytorch

Why do you fine-tune the encoder when training a classifier?

universome opened this issue · 1 comment

After we have trained the DIM encoder, we should train only a classifier on top of its representations. However, in your implementation you optimize the whole model. As far as I understand, DIM's representations should be used as-is, i.e. without fine-tuning the encoder. What am I missing?
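The evaluation protocol described here (a linear classifier on frozen representations) can be sketched in PyTorch as follows. Note the encoder below is a hypothetical stand-in; in practice the trained DIM encoder would be loaded from a checkpoint instead.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained DIM encoder;
# the real one would be restored from a checkpoint.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 64))

# Freeze the encoder so its representations are used as-is.
for p in encoder.parameters():
    p.requires_grad = False
encoder.eval()

# Only the linear classifier's parameters are given to the optimizer.
classifier = nn.Linear(64, 10)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 32, 32)       # dummy batch of CIFAR-sized images
y = torch.randint(0, 10, (8,))      # dummy labels

with torch.no_grad():               # no gradients flow into the encoder
    features = encoder(x)
logits = classifier(features)
loss = criterion(logits, y)
loss.backward()
optimizer.step()
```

With this setup only the classifier's weights receive gradients, so the encoder's representations stay fixed during classifier training.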

Sorry, apparently I am unable to read the code :|