PRBonn/lidar-bonnetal

How to use pre-trained model to test my own data ?

he-guo opened this issue · 8 comments

Hi, everyone!
I want to perform semantic segmentation on the PointCloud2 data I obtained from a VLP-16, and then label the original data. What should I do?
Any suggestions are greatly appreciated.

Hi @he-guo,

to use a different sensor, you have to modify the projection from the 3D point cloud to the range image in the architecture configuration; see https://github.com/PRBonn/lidar-bonnetal/blob/master/train/tasks/semantic/config/arch/darknet53.yaml

    sensor:
      name: "HDL64"
      type: "spherical" # projective
      fov_up: 3
      fov_down: -25
      img_prop:
        width: 2048
        height: 64

There, the values for fov_up and fov_down must be modified (the name is not used, as far as I know). The size of the resulting range image, i.e., width and height, should also be modified (at least the height, to get a dense range image).

In case of a Velodyne VLP-16, these values should work (but I did not test them):

   fov_up: 15 
   fov_down: -15  
   img_prop: 
     width: 2048 
     height: 16 

The width might also be 1024 or 512.
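As a rough rule of thumb (not something the repository prescribes), the width can be chosen close to the number of points the sensor returns per revolution:

    # VLP-16 at 10 Hz has a horizontal angular resolution of about 0.2 degrees
    # (values taken from the datasheet; adapt them to your sensor and spin rate)
    horizontal_resolution_deg = 0.2
    points_per_revolution = 360.0 / horizontal_resolution_deg   # ~1800 points
    # nearest power of two: width 2048 keeps all points, 1024 or 512 downsample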

The resulting range image should look "dense", i.e., there should ideally be no gaps between the pixels. The projection method currently assumes regularly spaced vertical angles.
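For reference, the projection works roughly like the simplified Python sketch below (the actual implementation is in the repository's laser-scan projection code; the function name here is only illustrative):

    import numpy as np

    def project_to_range_image(points, fov_up_deg, fov_down_deg, W, H):
        # points: (N, 3) array of x, y, z coordinates
        fov_up = np.radians(fov_up_deg)
        fov_down = np.radians(fov_down_deg)
        fov = abs(fov_up) + abs(fov_down)            # total vertical field of view

        depth = np.linalg.norm(points, 2, axis=1)    # range of each point
        yaw = -np.arctan2(points[:, 1], points[:, 0])
        pitch = np.arcsin(points[:, 2] / depth)

        # normalized image coordinates in [0, 1]
        u = 0.5 * (yaw / np.pi + 1.0)                # horizontal: assumes a full 360 degrees
        v = 1.0 - (pitch + abs(fov_down)) / fov      # vertical: fov_down .. fov_up

        # scale to image size and clamp to valid pixel indices
        u = np.clip(np.floor(u * W), 0, W - 1).astype(np.int32)
        v = np.clip(np.floor(v * H), 0, H - 1).astype(np.int32)
        return u, v, depth

With the height set to the number of beams (e.g., 16 for the VLP-16) and fov_up/fov_down matching the sensor, each laser ring roughly maps to its own image row, which is what makes the range image dense.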

Thank you very much for your advice. I'm trying it.

I think this should be solved. If you still have doubts, then please re-open the issue.

I'm sorry to bother you again, but have you tried the VLP-32?

@jbehley Thanks for your answer above. I would like to know how to calculate fov_up and fov_down. How did you come up with 15 and -15 respectively? Similarly, why 3 and -25 in your original work? I understand from the paper that f = fov_up + fov_down. But should we calculate this based on our own sensor?

Thanks

These are the field-of-view values from the sensor's specification, i.e., they cover the "opening angles" of the sensor, so you take them from your own sensor's datasheet. The Velodyne HDL-64E has an asymmetric field of view: its specification gives roughly +2° up and -25° down, which is where the 3 and -25 in the default configuration come from. If you want to use the pre-trained model with a different sensor, it will probably not work well.

@jbehley Hi, I've been trying to use the network on data from my own lidar (not a Velodyne). This lidar's horizontal field of view is only 120°, and the range image projected from my point cloud looks weird. I have modified fov_up and fov_down as well as the width, but it didn't work.
Any ideas? Thank you very much.
[screenshot of the projected range image]

In our projection, we assume that the lidar gives a full 360° view. Therefore, you have to account for this if your LiDAR covers only 120°.
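A minimal sketch of how the horizontal part of the projection could be adapted for such a sensor (a hypothetical modification, not code from the repository):

    import numpy as np

    def horizontal_pixel_limited_fov(points, W, h_fov_deg=120.0):
        # Hypothetical adaptation: spread the image columns over the sensor's
        # horizontal field of view instead of the full 360 degrees.
        h_fov = np.radians(h_fov_deg)
        yaw = -np.arctan2(points[:, 1], points[:, 0])   # 0 = straight ahead
        u = (yaw + h_fov / 2.0) / h_fov                 # in [0, 1] inside the FOV
        # points outside the field of view should be filtered out beforehand
        u = np.clip(np.floor(u * W), 0, W - 1).astype(np.int32)
        return u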