zgojcic/3DSmoothNet

Apply this method to 3D template matching?

murrdpirate opened this issue · 2 comments

I'm interested in applying this technique to 3D template matching, found in datasets such as LineMOD. Basically, given a 3D model of an object and a point cloud of an environment containing the model, find the pose of the model.

I figured I could convert the 3D model into a point cloud and use 3DSmoothNet to 'register' it with the environment point cloud. This doesn't immediately work, but I'm not sure I'm properly adapting the data to 3DSmoothNet or using appropriate settings. Just curious if there are any suggestions on making this work, or if the problem is just too different (e.g. due to object and environment point clouds being much different in size).
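For the first step (turning the 3D model into a point cloud), here is a minimal pure-NumPy sketch of area-weighted uniform sampling over a triangle mesh; the vertex/face arrays and the function name are placeholders, not part of 3DSmoothNet:

```python
import numpy as np

def sample_points_from_mesh(vertices, faces, n_points, seed=0):
    """Uniformly sample n_points from a triangle mesh surface.

    vertices: (V, 3) float array; faces: (F, 3) int array of vertex indices.
    Triangles are picked proportionally to their area, then a point is
    drawn uniformly inside each picked triangle via barycentric coords.
    """
    rng = np.random.default_rng(seed)
    tri = vertices[faces]                      # (F, 3, 3)
    # Triangle areas from the cross product of two edge vectors.
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates (the sqrt trick avoids corner bias).
    r1 = np.sqrt(rng.random(n_points))
    r2 = rng.random(n_points)
    a, b, c = tri[idx, 0], tri[idx, 1], tri[idx, 2]
    return (1 - r1)[:, None] * a \
        + (r1 * (1 - r2))[:, None] * b \
        + (r1 * r2)[:, None] * c

# Toy example: a unit square made of two triangles.
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [0, 2, 3]])
cloud = sample_points_from_mesh(verts, faces, 1000)
print(cloud.shape)  # (1000, 3)
```

The resulting cloud can then be fed to 3DSmoothNet's keypoint/descriptor pipeline like any other input cloud; libraries such as Open3D also provide ready-made mesh sampling if you prefer not to roll your own.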

Hi,

in principle it should work, provided the scenes are not too large. Since we sample keypoints randomly, as the scene grows only a few of them will lie on the object. You will probably also have to choose the voxel size (i.e. the size of the neighborhood) wisely: on the one hand you need enough context, and on the other hand the neighborhood should not be so large that it contains too much clutter.
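The sampling concern above can be made concrete with a quick back-of-envelope estimate; the numbers are purely illustrative:

```python
# Expected number of randomly sampled keypoints that land on the object,
# assuming uniform sampling over the scene cloud (illustrative numbers).
scene_points = 500_000    # points in the environment cloud
object_points = 5_000     # of which this many lie on the object
n_keypoints = 5_000       # keypoints sampled uniformly at random

expected_on_object = n_keypoints * object_points / scene_points
print(expected_on_object)  # 50.0
```

So for a large scene, a matching budget of several thousand keypoints may yield only a few dozen on the object, which argues for either cropping the scene or biasing the keypoint sampling toward candidate regions.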

It might also require retraining, as the current model was trained on scenes without clutter.

If you continue working on this and need some help let me know. I would also be happy if you could share your findings with me, as 3D template matching is something that we have never tried.

Thanks! I expect to be working on this for some time, so I'll start making some adjustments to voxel sizes, etc., and do some training. I'll definitely let you know how it goes. I greatly appreciate your feedback and offer to help.

Closing this issue for now.