zgojcic/3DSmoothNet

Bad registration result

tracykim opened this issue · 9 comments

Hi, Zan!
Thanks for your contribution first.

I can run the demo perfectly; however, when I test with my own data, the result is bad, as follows:
registration::RegistrationResult with fitness = 0.000000, inlier_rmse = 0.000000, and correspondence_set size of 0

I don't know whether the problem is the descriptors from the network or the registration in Open3D.

Thanks so much!
Kim.

Hi Kim,

Could you maybe provide a bit more information: what do the point clouds depict, and what are their resolution and overlap?

Please note that you have to adjust the parameters (the voxel size) to the characteristics of your point clouds. In the demo, we also sample only a small number of "keypoints"; you could try to increase the number of points, and if the scene is more challenging, maybe use a higher-dimensional descriptor (we provide pretrained weights for 64- and 128-dim descriptors).
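If you want to check whether the descriptors themselves are the problem (before blaming the Open3D registration), one quick sanity check is to count mutual nearest-neighbour matches between the two descriptor sets; overlapping clouds with good descriptors should produce plenty, while an empty correspondence set points back at the features. A minimal numpy sketch (not part of the 3DSmoothNet demo; `desc_a`/`desc_b` stand for whatever descriptor arrays you load):

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Index pairs (i, j) where descriptor i in A and j in B are
    mutual nearest neighbours in Euclidean space (brute force)."""
    # Pairwise squared distances between the two descriptor sets.
    d2 = (np.sum(desc_a**2, axis=1)[:, None]
          + np.sum(desc_b**2, axis=1)[None, :]
          - 2.0 * desc_a @ desc_b.T)
    nn_ab = np.argmin(d2, axis=1)   # best match in B for each row of A
    nn_ba = np.argmin(d2, axis=0)   # best match in A for each row of B
    # Keep only pairs that agree in both directions.
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

# Toy example: 4 random 64-dim descriptors, B is a shuffled copy of A,
# so every descriptor has exactly one mutual match.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 64))
perm = np.array([2, 0, 3, 1])
b = a[perm]
matches = mutual_nn_matches(a, b)
```

If `matches` stays near-empty on your real data even with many keypoints, the descriptors (or the voxel size they were computed with) are the thing to fix.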

Let me know if I can somehow help you with your problems.

Best
Zan

Hi, thanks for your reply. Sorry, I didn't elaborate on the data.

One point cloud depicts outdoor buildings and roads; the other is a part of them.

I tried increasing supportRadius to 0.6, taking 20000 random keypoints, and using the 128-dim model you provide; however, it didn't work.

For more details, I have sent my point clouds to your email. Thanks again.
Kim.

Hi Kim,

I had a look at the point clouds and have some remarks/ideas about why it probably does not work:

  1. The point clouds are very sparse; in particular, the point cloud of the "submap" has a resolution of at best 0.10 m close to the scanner, and it is in meters, so the voxel size would have to be much larger (we suggest at least around 15 times the resolution).

  2. There seems to be a scale difference between the two point clouds? Or maybe this was just my misinterpretation. (Due to the metric definition of the voxel size, 3DSmoothNet is not capable of aligning point clouds with different scales, at least not "out of the box".)

  3. The primary intention of 3DSmoothNet is not "retrieval", where one has a point cloud covering a large extent ('map') and tries to retrieve the position based on a much smaller local point cloud. In this case the coverage difference does not seem extreme, so this should probably not be a problem, but otherwise there are works that specialize in cases like this, e.g.: http://openaccess.thecvf.com/content_cvpr_2018/papers/Uy_PointNetVLAD_Deep_Point_CVPR_2018_paper.pdf
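The 15-times-the-resolution rule of thumb from point 1 can be checked numerically: estimate the cloud's resolution as the median nearest-neighbour distance, then scale it to get a lower bound for the voxel size. A minimal numpy sketch (brute force, so subsample large clouds first; `estimate_resolution` is an illustrative helper, not part of the repo):

```python
import numpy as np

def estimate_resolution(points):
    """Median distance from each point to its nearest neighbour
    (brute force; fine for small clouds or a random subsample)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :])**2, axis=-1)
    np.fill_diagonal(d2, np.inf)  # ignore self-distances
    return float(np.median(np.sqrt(d2.min(axis=1))))

# Toy cloud: a regular grid with 0.10 m spacing, so resolution ~ 0.10 m.
g = np.arange(0, 1.0, 0.10)
cloud = np.stack(np.meshgrid(g, g, g), axis=-1).reshape(-1, 3)
res = estimate_resolution(cloud)
voxel_size = 15 * res  # suggested lower bound: ~15x the resolution
```

For a 0.10 m resolution cloud this gives a voxel size of about 1.5 m, consistent with the numbers discussed below.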

Best
Zan

Hi, Zan. Thanks for your answer, it is so cool.

1. Yes, the sparsity of my point clouds may be causing the problem.

2. It is not your misinterpretation; I want to align a global map with a local point cloud.

3. Maybe segmenting the large map is worth a try. Thanks for the paper suggestion as well.

Have a nice day, thank you!
Kim

Hi Kim,
the sparsity might be the main problem, yes. You could try a much larger voxel size, around 1.5 to 2 m.

Regarding the scale difference, I did not have the "extent" of the point clouds in mind, i.e. one covering a much larger area than the other. What I meant was that one of the point clouds might not be true to scale (1 m in one point cloud is not the same distance as 1 m in the other).
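A quick, rough way to spot such a true-to-scale problem is to compare the median nearest-neighbour spacing of the two clouds: for clouds captured with similar density, a ratio far from 1 hints at a scale mismatch (it also picks up pure density differences, so treat it only as a hint). A small numpy sketch with a synthetic example:

```python
import numpy as np

def median_nn_spacing(points):
    """Median nearest-neighbour distance of a cloud (brute force)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :])**2, axis=-1)
    np.fill_diagonal(d2, np.inf)  # ignore self-distances
    return float(np.median(np.sqrt(d2.min(axis=1))))

rng = np.random.default_rng(1)
cloud_a = rng.uniform(size=(200, 3))
cloud_b = 2.0 * cloud_a  # same cloud scaled by 2: not true to scale
ratio = median_nn_spacing(cloud_b) / median_nn_spacing(cloud_a)
# A ratio far from 1 (here ~2) suggests a scale (or density) mismatch.
```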


Annyeong, Kim! May I know which programming language you used to run the demo? I'm a newbie just getting started with this, thank you!


Hi, you can run the demo with Python; the contents of the README are very detailed, you should read it.

Note: the 'tfrecord' path at ./core/network.py:76 is missing a '/'.
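As an aside, that missing-'/' class of bug is avoided entirely by building paths with os.path.join instead of string concatenation; a tiny illustration (the directory and file names here are hypothetical, not the actual ones in network.py):

```python
import os

data_dir = "./data"       # hypothetical directory, for illustration only
fname = "train.tfrecord"  # hypothetical file name

broken = data_dir + fname              # "./datatrain.tfrecord": no separator
fixed = os.path.join(data_dir, fname)  # "./data/train.tfrecord"
```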

I am closing this due to inactivity; if you have other problems/questions, please open a new issue.