chaofengc/Face-SPARNet

Training epochs

BounharAbdelaziz opened this issue · 2 comments

Hi, first of all I want to thank you for sharing this project!

I wonder whether the +1 in the training loop (train.py, line 27) is actually needed. As stated in the paper, SPARNet-HD was trained for 10 epochs, which matches the train.sh file (--total_epochs 10). The way the loop is written now, I think it will actually train for 11 epochs. I wonder if you had noticed this before (not sure, since you did catch the pcp loss bug).
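
To make the off-by-one concrete, here is a minimal sketch of what I assume line 27 roughly does (the variable names are mine, not the actual code):

```python
# Hypothetical sketch (not the exact train.py), assuming the loop bound
# uses total_epochs + 1 while the start epoch is 0.
total_epochs = 10   # from train.sh: --total_epochs 10
start_epoch = 0

epochs_run = list(range(start_epoch, total_epochs + 1))
print(len(epochs_run))  # 11 -> one more epoch than the 10 reported in the paper
```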

Can you please confirm that? Many thanks in advance!

Best regards,

Thank you for your interest.

I did not notice that. Originally the epoch count started from 1, which is why there is a +1 in line 27. Later I changed the start epoch to 0 but forgot to remove the +1. Thank you for pointing this out.
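
For anyone reading later, a small sketch of why the +1 made sense with a 1-based start but adds an extra epoch with a 0-based start (purely illustrative, not the repo's actual code):

```python
# Illustrative bookkeeping only; variable names are assumptions, not the repo code.
total_epochs = 10

# Original scheme: epochs counted from 1, so the +1 gave exactly 10 epochs.
assert len(range(1, total_epochs + 1)) == 10

# Current scheme: start epoch changed to 0, but the +1 remained -> 11 epochs.
assert len(range(0, total_epochs + 1)) == 11

# Dropping the +1 with a 0-based start would give exactly 10 epochs again.
assert len(range(0, total_epochs)) == 10
```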

In fact, although the network was trained for 10 epochs in the paper, we found there was no need to match that exactly when reproducing the results. The released SPARNetHD models were all trained from scratch until the program stopped, i.e. for 11 epochs.

Since I do not have time to retrain the model and one more epoch makes no real difference, I will leave the code as it is for now.

Thank you very much for your quick reply!