JUGGHM/PENet_ICRA2021

A rather common but vital question about confidence maps

KirkZhengYW opened this issue · 3 comments

Thank you for your good work!
I notice that in both the CD branch and the DD branch, the confidence maps (concatenated with the CD depth and DD depth, respectively) are generated by the last convolutional layer, which is an ordinary conv+bn+relu block.

# CD branch: the last (de)conv layer outputs 2 channels, depth and confidence
self.rgb_decoder_output = deconvbnrelu(in_channels=32, out_channels=2, kernel_size=3, stride=1, padding=1, output_padding=0)

rgb_output = self.rgb_decoder_output(rgb_feature0_plus)
rgb_depth = rgb_output[:, 0:1, :, :]  # channel 0: CD depth
rgb_conf = rgb_output[:, 1:2, :, :]   # channel 1: CD confidence
# DD branch: the last conv layer likewise outputs 2 channels
self.decoder_layer6 = convbnrelu(in_channels=32, out_channels=2, kernel_size=3, stride=1, padding=1)

depth_output = self.decoder_layer6(decoder_feature5)
d_depth, d_conf = torch.chunk(depth_output, 2, dim=1)  # DD depth, DD confidence
# softmax across the two branches so the two confidence maps sum to one per pixel
rgb_conf, d_conf = torch.chunk(self.softmax(torch.cat((rgb_conf, d_conf), dim=1)), 2, dim=1)

I wonder how a convbnrelu layer can output a neat confidence map and a depth map without any confidence supervision. Could you please point me to some relevant works or papers? Thanks.

Thanks for your interest! In this work, the confidence maps are neat because we impose intermediate supervision on the corresponding depth maps of each branch, which leads to implicit constraints on the confidence.
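
For intuition, here is a minimal, self-contained sketch of that mechanism (not the repository code; ToyHead and the plain MSE loss are illustrative assumptions, whereas PENet uses its own losses and architecture). Because the fused depth is a confidence-weighted sum, depth-only losses backpropagate through the confidence channels, which is the implicit constraint mentioned above:

import torch
import torch.nn as nn

# Hypothetical head: the last layer emits 2 channels, depth and a confidence logit.
class ToyHead(nn.Module):
    def __init__(self, in_channels=32):
        super().__init__()
        self.out = nn.Conv2d(in_channels, 2, kernel_size=3, padding=1)

    def forward(self, feat):
        out = self.out(feat)
        return out[:, 0:1], out[:, 1:2]  # depth, confidence logit

cd_head, dd_head = ToyHead(), ToyHead()
feat_cd, feat_dd = torch.randn(1, 32, 8, 8), torch.randn(1, 32, 8, 8)
gt = torch.rand(1, 1, 8, 8) * 80.0  # ground-truth depth only, no confidence labels

cd_depth, cd_conf = cd_head(feat_cd)
dd_depth, dd_conf = dd_head(feat_dd)

# Softmax across the two branches: weights are positive and sum to one per pixel.
cd_w, dd_w = torch.chunk(torch.softmax(torch.cat((cd_conf, dd_conf), dim=1), dim=1), 2, dim=1)
fused = cd_w * cd_depth + dd_w * dd_depth

# Depth supervision on the fused map plus intermediate supervision per branch.
mse = nn.MSELoss()
loss = mse(fused, gt) + mse(cd_depth, gt) + mse(dd_depth, gt)
loss.backward()  # the fused term routes gradient into cd_conf and dd_conf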

Thank you for your reply! Does intermediate supervision require a ground-truth confidence map at the corresponding scale?

No, only ground-truth depth maps are required. Meanwhile, the softmax normalizes the exponentials of the two confidence maps so that they sum to one at every pixel.
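
As a quick sanity check (a standalone snippet, not code from this repository), the softmax over the concatenated confidence channels guarantees exactly that per-pixel normalization:

import torch

rgb_conf = torch.randn(1, 1, 4, 4)  # raw CD-branch confidence logits
d_conf = torch.randn(1, 1, 4, 4)    # raw DD-branch confidence logits

# Same pattern as in the model: softmax over the channel dim, then split back.
rgb_conf, d_conf = torch.chunk(
    torch.softmax(torch.cat((rgb_conf, d_conf), dim=1), dim=1), 2, dim=1)

# softmax(c) = exp(c) / sum(exp(c)), so the two maps sum to one at every pixel.
print(torch.allclose(rgb_conf + d_conf, torch.ones_like(rgb_conf)))  # True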