YvanYin/VNL_Monocular_Depth_Prediction

training with weighted cross entropy loss

Closed this issue · 5 comments

Hello, I tried to use weighted cross-entropy to train a baseline model that simply formulates depth prediction as a classification problem instead of regression. However, the training loss would not converge to a low value and the results are very bad. Could you please give me some advice? I would appreciate it if you could share your loss function and training code with me.
This is my email: zjw18@mails.tsinghua.edu.cn
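(For anyone else setting up a similar baseline: the classification formulation needs the continuous depth map discretized into class labels first. Below is a minimal numpy sketch using log-space bins, which is a common choice in depth-classification papers; the `d_min`/`d_max`/`n_bins` values here are placeholders, not the repo's settings.)

```python
import numpy as np

def depth_to_bins(depth, d_min=0.5, d_max=80.0, n_bins=80):
    """Discretize metric depth (meters) into class labels.

    Uses log-space bin edges, a common choice for depth-as-classification
    baselines. All parameter values here are illustrative placeholders.
    Returns integer labels in [0, n_bins - 1] with the same shape as `depth`.
    """
    log_d = np.log(np.clip(depth, d_min, d_max))
    # n_bins + 1 edges in log space; digitize against the interior edges
    edges = np.linspace(np.log(d_min), np.log(d_max), n_bins + 1)
    return np.digitize(log_d, edges[1:-1])
```

The resulting per-pixel labels can then be fed to an ordinary (or weighted) cross-entropy loss over the `n_bins` classes.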

Hello,
I have the same issue. Could you please help me figure out how to implement this loss function? I think it has to be different from standard cross-entropy.

Thanks

Hi, you can read the paper "Deep attention-based classification network for robust depth prediction" for the details of the weighted cross-entropy loss. If you have other problems, you can list them here.

@YvanYin Thanks for your reply. I would really appreciate it if you could share the loss function code as well. Thanks

The training code has already been released.

Hi there,

Thank you for sharing the training code! I noticed that in your paper you cited "Estimating depth from monocular images as classification using deep fully convolutional residual networks" [1] for your implementation of the WCEL loss. However, the WCEL loss in [1] is different from the one in "Deep attention-based classification network for robust depth prediction" [2]. In [1], the authors adjust the per-pixel classification loss by H(p, gt) = exp[−a(p − gt)^2], which encourages predicted depth labels closer to the ground truth to contribute more to the final loss. After looking at your code, I think this repo follows [2], which does not have such an adjustment. Would you mind clarifying which version you used to get the results in the paper? Thanks!
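For reference, in case it helps others reading this thread, here is a minimal numpy sketch of the weighting from [1] as I understand it: each bin's log-probability is weighted by H(p, gt) = exp[−a(p − gt)^2], so bins near the ground-truth bin dominate the loss. This is my own reading, not code from this repo, and `alpha` is an arbitrary placeholder value:

```python
import numpy as np

def weighted_ce_loss(logits, gt_bins, alpha=0.2):
    """Weighted cross-entropy over depth bins, per my reading of [1].

    logits:  (N, K) per-pixel scores over K depth bins.
    gt_bins: (N,) ground-truth bin indices.
    alpha:   weighting sharpness (placeholder value, not from the repo).
    """
    n, k = logits.shape
    # numerically stable softmax over the K bins
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # H(p, gt) = exp(-alpha * (p - gt)^2): weight for each bin index p
    bins = np.arange(k)
    h = np.exp(-alpha * (bins[None, :] - gt_bins[:, None]) ** 2)  # (N, K)
    # loss = -sum_p H(p, gt) * log P(p), averaged over pixels
    return -(h * np.log(probs + 1e-12)).sum(axis=1).mean()
```

With this weighting, a prediction peaked near the ground-truth bin incurs a much smaller loss than one peaked far away, which is the behavior [1] describes.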

Best,
Yunfan