nianticlabs/manydepth

A tiny bug during training

ZhanyuGuo opened this issue · 0 comments

Hi, thanks for your great work!

I found a small bug that affects the learning-rate decay: in the `train()` function in `train.py`, when the epoch reaches `freeze_teacher_epoch`, the optimizer and `lr_scheduler` are re-created, which resets the epoch count to 0 from the `lr_scheduler`'s point of view.
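To make the effect concrete, here is a minimal standalone sketch of that reset pattern (the `Linear` model, the `make_opt_and_sched` helper, and the learning rate are placeholders for illustration, not manydepth's actual code; `step_size=15`, 20 epochs, and freezing at epoch 15 match the defaults discussed here):

```python
import torch
from torch import optim

# Placeholder model and hyperparameters, for illustration only.
model = torch.nn.Linear(4, 1)

def make_opt_and_sched():
    opt = optim.Adam(model.parameters(), lr=1e-4)
    sched = optim.lr_scheduler.StepLR(opt, step_size=15, gamma=0.1)
    return opt, sched

optimizer, scheduler = make_opt_and_sched()
freeze_teacher_epoch = 15

for epoch in range(20):
    if epoch == freeze_teacher_epoch:
        # train.py rebuilds the optimizer and scheduler here to drop the
        # teacher/pose parameters -- this also resets StepLR's internal
        # epoch counter to 0.
        optimizer, scheduler = make_opt_and_sched()
    print(epoch, optimizer.param_groups[0]["lr"])  # lr used for this epoch
    optimizer.step()   # stand-in for one epoch of training batches
    scheduler.step()
```

Printing the learning rate at the start of every epoch shows it stays at `1e-4` for all 20 epochs: the decay that `StepLR` applies at the end of epoch 14 is immediately thrown away when the optimizer and scheduler are rebuilt at the start of epoch 15.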

I have verified that the learning rate never actually decays in a normal training run: `step_size=15`, and the `lr_scheduler` is reset exactly when `epoch == 15`, so the freshly created scheduler never reaches its step boundary within the remaining epochs.
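One possible repair, replacing the plain re-creation inside the loop above (a sketch only; the retraining below may use a different fix), is to hand the real epoch to the new scheduler via `last_epoch` so the decay schedule stays aligned:

```python
if epoch == freeze_teacher_epoch:
    optimizer = optim.Adam(model.parameters(), lr=1e-4)
    # PyTorch requires 'initial_lr' in each param group when a scheduler
    # is constructed with last_epoch != -1.
    for group in optimizer.param_groups:
        group.setdefault("initial_lr", group["lr"])
    # last_epoch=epoch - 1 makes the new scheduler resume at the current
    # epoch, so the decay still fires once last_epoch reaches step_size.
    scheduler = optim.lr_scheduler.StepLR(
        optimizer, step_size=15, gamma=0.1, last_epoch=epoch - 1)
```

With this change the sketch decays the learning rate to `1e-5` from epoch 15 onward, matching an uninterrupted `StepLR` schedule.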

I fixed it and trained a new model under the same conditions, getting the following results:

|          | abs_rel | sq_rel | rmse  | rmse_log | a1    | a2    | a3    |
| -------- | ------- | ------ | ----- | -------- | ----- | ----- | ----- |
| KITTI_MR | 0.098   | 0.770  | 4.459 | 0.176    | 0.900 | 0.965 | 0.983 |
| NEW      | 0.100   | 0.755  | 4.423 | 0.178    | 0.899 | 0.964 | 0.983 |

The retrained model seems slightly better on sq_rel and rmse.