How to train on the KITTI dataset?
lucasjinreal opened this issue · 5 comments
How to train on KITTI?
To be specific, there are 2 questions:
- What is the network input inside the samples directory? It is an array of shape [64, 512, 5], but a point cloud normally has 4 dimensions, so where does the 5th dimension come from?
- How do I generate the lidar-2d data?
We provide converted KITTI data that you can download directly.
The 5 channels are: x, y, z, intensity, and mask, where the mask indicates whether the pixel is missing or not.
The 2d projection method is described in the SqueezeSegV1 paper: https://arxiv.org/abs/1710.07368
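For reference, here is a minimal sketch (not the authors' exact preprocessing) of projecting a raw KITTI Velodyne scan onto a [64, 512, 5] grid of x, y, z, intensity, mask. The field-of-view values are assumptions based on the HDL-64E sensor and the SqueezeSeg setup: roughly 26.9 degrees of vertical FOV (+2 to -24.9) over 64 rows, and a 90-degree frontal horizontal FOV over 512 columns (h_res = 90 / 512 = 0.17578125 degrees).

```python
import numpy as np

def project_to_grid(scan, H=64, W=512, fov_up=2.0, fov_down=-24.9, h_fov=90.0):
    """scan: (N, 4) array of x, y, z, intensity from a KITTI .bin Velodyne file."""
    x, y, z, intensity = scan[:, 0], scan[:, 1], scan[:, 2], scan[:, 3]
    r = np.sqrt(x**2 + y**2 + z**2) + 1e-8           # range of each point

    yaw = np.degrees(np.arctan2(y, x))               # horizontal angle
    pitch = np.degrees(np.arcsin(z / r))             # vertical angle

    # keep only points inside the frontal horizontal field of view
    keep = np.abs(yaw) < h_fov / 2
    x, y, z, intensity, yaw, pitch = (a[keep] for a in (x, y, z, intensity, yaw, pitch))

    # angles -> pixel coordinates
    col = ((h_fov / 2 - yaw) / h_fov * W).astype(np.int32)
    row = ((fov_up - pitch) / (fov_up - fov_down) * H).astype(np.int32)
    col = np.clip(col, 0, W - 1)
    row = np.clip(row, 0, H - 1)

    grid = np.zeros((H, W, 5), dtype=np.float32)
    grid[row, col, 0] = x
    grid[row, col, 1] = y
    grid[row, col, 2] = z
    grid[row, col, 3] = intensity
    grid[row, col, 4] = 1.0                          # mask: 1 where a point landed
    return grid

# usage:
# scan = np.fromfile('000000.bin', dtype=np.float32).reshape(-1, 4)
# tensor = project_to_grid(scan)   # shape (64, 512, 5)
```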
@BichenWuUCB thanks, I still have one more question: the output is [64, 512] with class IDs inside. Does that mean the network can only predict a class per area, much like a bounding box in a BEV image? How can I visualize the result directly on the point cloud with colored points?
I mean, the network cannot predict the height of an object, can it?
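This is not answered in the thread, but since the x, y, z channels of the input tensor keep the original 3D coordinates, a per-pixel prediction can be painted straight back onto the point cloud. A minimal sketch, assuming `grid` is the [64, 512, 5] input and `pred` is the [64, 512] label map; the palette below is hypothetical:

```python
import numpy as np

def colored_points(grid, pred, palette=None):
    """Map per-pixel class IDs back onto the 3D points for visualization."""
    if palette is None:
        # hypothetical palette: one RGB color per class id (0..3)
        palette = np.array([[128, 128, 128],   # background
                            [255,   0,   0],   # car
                            [  0, 255,   0],   # pedestrian
                            [  0,   0, 255]],  # cyclist
                           dtype=np.uint8)
    valid = grid[..., 4] > 0                    # mask channel: real pixels only
    xyz = grid[..., :3][valid]                  # (M, 3) 3D coordinates
    colors = palette[pred[valid]]               # (M, 3) RGB per point
    return xyz, colors

# xyz and colors can then be fed to any point-cloud viewer
# (e.g. open3d or a simple matplotlib 3D scatter plot).
```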
@BichenWuUCB I wrote code for the 2D projection method described in the SqueezeSegV1 paper, but there are always lines in the middle of the image. Why is that? Did you use the parameters v_res=26.9/64 and h_res=0.17578125?
May I ask how the mask data is obtained?
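The thread does not answer this, but one plausible reading of the earlier reply ("the mask indicates if this is a missing pixel or not") is that the mask is simply 1 wherever at least one lidar point projects into the pixel and 0 elsewhere. A sketch under that assumption:

```python
import numpy as np

def make_mask(grid_xyz):
    """grid_xyz: (64, 512, 3) projected x, y, z channels."""
    # Assumption (not confirmed here): a pixel is valid if any point landed in
    # it, i.e. its range is non-zero after projection.
    r = np.linalg.norm(grid_xyz, axis=-1)        # per-pixel range
    return (r > 0).astype(np.float32)            # 1 = valid pixel, 0 = missing
```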