funnyzhou/C2L_MICCAI2020

What is the proper loss value during training?

Closed this issue · 3 comments

For me, the training loss is about 11.5 at the beginning and about 9.5 at the end. Is that reasonable?

In our experiments, the absolute loss value does not mean much. You have to fine-tune the checkpoint on a downstream task to check the effect of the pretraining.
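Since the loss value alone is not informative, the practical check is to fine-tune the pretrained checkpoint on a labeled downstream task. A minimal PyTorch sketch of that workflow, assuming a toy backbone, a hypothetical checkpoint file/key name, and an illustrative 2-class head (the actual C2L encoder and training script differ):

```python
import torch
import torch.nn as nn

# Stand-in backbone; C2L uses a ResNet-style encoder in practice.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(16, 8), nn.ReLU())

# Simulate a checkpoint produced by pretraining (hypothetical file/key names).
torch.save({"state_dict": backbone.state_dict()}, "pretrained.pth")

# Fine-tuning: restore the pretrained weights, then attach a fresh task head.
checkpoint = torch.load("pretrained.pth")
backbone.load_state_dict(checkpoint["state_dict"])
model = nn.Sequential(backbone, nn.Linear(8, 2))  # illustrative 2-class head

# Train end-to-end on downstream labels; the downstream metric, not the
# pretraining loss, is what indicates whether pretraining helped.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(4, 16), torch.tensor([0, 1, 0, 1])
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

Whether the pretraining worked is then judged by the fine-tuned model's validation metric, not by the pretraining loss curve.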

In the case of using 700k samples, how many epochs of pretraining are needed to achieve the best performance?