JUGGHM/PENet_ICRA2021

How to use the sparse depth?

Pattern6 opened this issue · 4 comments

Thank you for your outstanding contribution!
I want to know how the Color-dominant Branch combines the color image with the point cloud before they are sent to the network. Does the LiDAR point cloud contain only the valid (projected) points, and does the color image use the same valid pixels as the point cloud? How can we get a dense depth map this way?
This question has been bothering me, and I hope you can answer it. Thank you again!

Thanks for your interest! The point clouds have been projected into an image plane so they could be regarded as a depth image. All (not only "effective" points) pixels of the color image and the sparse depth map are sent into the network.
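The projection step described above can be sketched as follows. This is not code from the repository, just an illustration of how LiDAR points already expressed in camera coordinates can be splatted onto the image plane with a pinhole intrinsic matrix `K`; every pixel that receives no point stays zero, which is exactly the sparse depth map the network consumes.

```python
import numpy as np

def project_to_depth_image(points, K, H, W):
    """Project points (N x 3, camera coordinates) onto an H x W
    image plane to form a sparse depth map. Pixels with no point
    remain 0, matching the sparse input fed to the network."""
    depth = np.zeros((H, W), dtype=np.float32)
    valid = points[:, 2] > 0                 # keep points in front of the camera
    pts = points[valid]
    uv = (K @ pts.T).T                       # pinhole projection
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    # if several points land on the same pixel, keep the nearest one
    for ui, vi, zi in zip(u[inside], v[inside], pts[inside, 2]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth
```

In practice the KITTI devkit calibration (rotation, translation, rectification) must be applied first to move points from the LiDAR frame into the camera frame; the sketch assumes that step is already done.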

Thank you for your reply!
But aren't there places where there is no point cloud at all? Doesn't this affect the convolutions in the network?


Actually it does not. You may have heard of special strategies such as sparsity-invariant convolutions, but according to our experiments and analysis, vanilla 2D convolution is enough for acceptable predictions.

Thank you for your contribution!
When running the model in ROS with a KITTI rosbag playing, how do I convert the PointCloud2 messages into the proper input format?
And how do I set the range of the depth input in terms of physical distance?