Engineering-Course/LIP_JPPNet

Do training and inference use the same pretrained weights?

Closed this issue · 1 comments

Hi, I noticed that when running the training code, you load the model that you have already trained on the LIP dataset (which is the same model you use for inference):
restore_var = all_saver_var #[v for v in all_saver_var if 'pose' not in v.name and 'parsing' not in v.name]
I guess you forgot to uncomment the filter in this line of train_JPPNet-s2.py?

The restore_var can be modified according to the model you are loading from.
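To illustrate, here is a minimal sketch (TensorFlow 1.x) of how restore_var could be switched depending on the checkpoint: restoring everything when fine-tuning from the released LIP checkpoint, or dropping the pose/parsing branches when starting from a backbone-only checkpoint. The flag name and checkpoint path below are hypothetical, not part of the repository.

```python
import tensorflow as tf

# Assumes the JPPNet graph has already been built, so its variables are
# registered in the default graph (as in train_JPPNet-s2.py).
all_saver_var = tf.global_variables()

FROM_LIP_CHECKPOINT = True  # hypothetical flag; set False for a backbone-only checkpoint

if FROM_LIP_CHECKPOINT:
    # The released LIP checkpoint contains every variable, so restore all of them.
    restore_var = all_saver_var
else:
    # A backbone-only (e.g. ImageNet ResNet) checkpoint has no pose/parsing
    # branches; exclude them so they are trained from scratch.
    restore_var = [v for v in all_saver_var
                   if 'pose' not in v.name and 'parsing' not in v.name]

loader = tf.train.Saver(var_list=restore_var)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loader.restore(sess, './checkpoint/JPPNet-s2/model.ckpt')  # hypothetical path
```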