Evaluating the model
testingshanu opened this issue · 2 comments
Hi,
Do you plan to provide the scripts to validate the final model?
Also, in the paper I observed that evaluation metrics were mainly provided for semantic segmentation. I was curious whether there is a comparison between the depth results obtained from your network and those of "Digging into self-supervised monocular depth estimation".
While saving the model, I get an error at https://github.com/lhoyer/improving_segmentation_with_selfsupervised_depth/blob/master/train.py#L362.
self.ema_model is not None, so execution reaches this line and the error is raised.
Could you please provide the necessary changes to fix this issue?
Hi,
The validation of the model is already part of the tensorboard logging. We do not use a separate validation script. We don't evaluate the depth estimation performance as we focus on semantic segmentation in this project.
If you want to store the model for inference, you can just remove the NotImplementedError. However, resuming training from a checkpoint is not supported.
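If it is useful, a rough sketch of what storing the weights for inference could look like once the NotImplementedError is removed (the trainer.model attribute and the checkpoint key names are assumptions, not the repository's exact code):

```python
import torch

def save_for_inference(trainer, path):
    """Store only the weights needed for inference.
    Resuming training from this file is not supported."""
    checkpoint = {"model_state_dict": trainer.model.state_dict()}  # assumed attribute name
    # Keep the EMA weights as well if the mean-teacher model exists, since
    # those are typically the ones used for evaluation.
    if trainer.ema_model is not None:
        checkpoint["ema_model_state_dict"] = trainer.ema_model.state_dict()
    torch.save(checkpoint, path)
```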