How should I train my dataset?
wyaimyj opened this issue · 3 comments
Hello, I would like to use your model to train on my own dataset. My dataset consists of point clouds of a statue captured from different viewpoints. However, I do not have the pairwise rotation and translation matrices between the point clouds. How can I address this issue? Thank you!
Hi @wyaimyj,
Thanks for your interest!
I think a handful of point clouds of a single statue is not enough to train deep descriptors. I suggest directly using a pairwise registration model such as GeoTransformer, pre-trained on an object-level dataset such as ModelNet40, to solve the pairwise registrations.
Then adopt SGHR's transformation synchronization module to recover globally consistent scan poses.
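To illustrate the synchronization step, here is a minimal NumPy sketch of the classical spectral rotation-synchronization idea (not SGHR's actual implementation, which additionally handles noise and outliers): given pairwise rotations R_ij ≈ R_i R_j^T, it recovers a globally consistent rotation per scan. The function name and dict-based input format are my own choices for the example.

```python
import numpy as np

def synchronize_rotations(rel_R, n):
    """Hypothetical sketch of spectral rotation synchronization.
    rel_R: dict mapping (i, j) with i < j to a 3x3 relative rotation
           R_ij ~= R_i @ R_j.T.
    Returns n global rotations with the gauge R_0 = identity."""
    # Build the symmetric block matrix M with blocks M[i, j] = R_ij.
    M = np.zeros((3 * n, 3 * n))
    for i in range(n):
        M[3*i:3*i+3, 3*i:3*i+3] = np.eye(3)
    for (i, j), Rij in rel_R.items():
        M[3*i:3*i+3, 3*j:3*j+3] = Rij
        M[3*j:3*j+3, 3*i:3*i+3] = Rij.T
    # The top-3 eigenvectors span the stacked global rotations,
    # up to one common orthogonal ambiguity.
    _, V = np.linalg.eigh(M)   # eigenvalues in ascending order
    U = V[:, -3:]              # shape (3n, 3)
    B0 = U[0:3, :]
    poses = []
    for i in range(n):
        Bi = U[3*i:3*i+3, :]
        # B_i @ B_0.T ~ R_i @ R_0.T (up to scale); project onto SO(3).
        u, _, vt = np.linalg.svd(Bi @ B0.T)
        d = np.sign(np.linalg.det(u @ vt))
        poses.append(u @ np.diag([1.0, 1.0, d]) @ vt)
    return poses
```

With these rotations (plus an analogous least-squares step for the translations), every scan can be placed into one common frame; in practice the released SGHR synchronization code should be preferred, since it is designed to cope with noisy and incorrect pairwise estimates.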
Yours,
Thank you for your response. I would also like to ask how the rotation and translation matrices for each point cloud, as used in datasets like 3DMatch, are obtained.
Hi @wyaimyj,
Sorry for the late reply; I have been working on something else!
The ground-truth scan poses are recorded with sensors such as an IMU while the raw RGB-D data of 3DMatch is scanned (3DMatch is assembled from many datasets); they can be found at https://3dmatch.cs.princeton.edu/.
The solved scan poses come from the pose synchronization part of SGHR.
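As a side note on how per-scan poses relate to the pairwise matrices asked about above, here is a small NumPy sketch (my own illustration, not code from this repo): if T_i is the 4x4 homogeneous pose mapping scan i into the world frame, then the pairwise ground truth mapping scan j into scan i's frame is T_ij = T_i^{-1} T_j.

```python
import numpy as np

def make_pose(R, t):
    """Assemble a 4x4 homogeneous scan-to-world pose from a 3x3
    rotation R and a 3-vector translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_pose(T_i, T_j):
    """Pairwise ground truth between two scans: a point p in scan j's
    frame maps to scan i's frame via (T_i^-1 @ T_j) applied to p."""
    return np.linalg.inv(T_i) @ T_j
```

This is how pairwise training labels can be derived once per-scan poses (recorded or synchronized) are available.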
Yours,