Can I use a [-1,1] custom dataset organized like the PCN dataset to train your PoinTr model?
wra187611 opened this issue · 11 comments
That is because I want to use PoinTr on incomplete point clouds from the real world that don't have ground truth.
I also tried to make a custom dataset just like the PCN dataset, normalized to [-0.5, 0.5]. But when I use the pretrained model on real point clouds (which I also normalized to [-0.5, 0.5]), they still differ a lot in space from the training data. Is it because of the bounding box or something else?
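To be concrete, this is roughly how I normalize a cloud into [-0.5, 0.5] (a bounding-box based sketch; the function name and details are just my illustration, and the official PCN preprocessing may differ):

```python
import numpy as np

def normalize_to_unit_box(points: np.ndarray):
    """Shift and scale an (N, 3) point cloud so it fits inside [-0.5, 0.5]^3.

    Returns (normalized_points, center, scale) so the same transform can be
    reused on a matching cloud.
    """
    pmin, pmax = points.min(axis=0), points.max(axis=0)
    center = (pmin + pmax) / 2.0      # bounding-box center
    scale = (pmax - pmin).max()       # longest bounding-box edge
    return (points - center) / scale, center, scale
```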
Hope for your reply!
Hi, can you show me some cases from your application? The data distribution gap may cause unsatisfactory performance, so we recommend finetuning the pretrained models on your real-application data with a small dataset (just like we test on KITTI).
I have a synthetic model set of human bodies. I normalized them to [-0.5, 0.5] and cropped them to make partial data in my own way, but after normalization to [-0.5, 0.5] the real partial point clouds are not in the same space for the application. Can you give me some advice?
Human bodies are totally different from the training samples in PCN or ShapeNet, so you should train a PoinTr model from scratch. Maybe you can try to train an AdaPoinTr on the NTU-120-RGBD dataset: just convert the depth maps into point clouds and produce the partial input in your own way. Then you can try this model in your situation.
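As a rough sketch of the depth-to-point-cloud step (standard pinhole back-projection; the function name and the intrinsics fx, fy, cx, cy are placeholders you should replace with the parameters of the depth sensor used for NTU-120-RGBD):

```python
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map of shape (H, W) into an (M, 3) point cloud
    with the pinhole camera model. Pixels with zero depth are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]
```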
Thanks. I know it's not like PCN or ShapeNet, but in my opinion making the dataset follow the PCN format is easier than training on a new dataset like NTU-120-RGBD.
The problem is that I don't have ground truth for the data I test on; I only have a synthetic human set called SHERC14. I first normalize the complete human point clouds to [-0.5, 0.5] and then crop them, treating them as a single category, "body". From what I see on TensorBoard, it trains well with PoinTr using the PCN_models config. However, when I apply it to real partial point clouds with your inference.py to output a completed result, I realized the input needs to be normalized. I normalized it, but it is still different from my training data, so the result is not very good. Take a teeth model for example: the yellow one is training data, the dark one is real data.

Now I'm trying to crop first, normalize the partial cloud, and apply the same scale parameters to the complete point cloud before training again. Please let me know if that is OK, or tell me if I'm wrong.
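To be concrete about the two orderings I am comparing, here is a rough sketch (the half-space cut is only a stand-in for my real cropping code, and the helper names are mine):

```python
import numpy as np

def normalize_to_unit_box(points):
    """Fit an (N, 3) cloud into [-0.5, 0.5]^3 and return the transform for reuse."""
    pmin, pmax = points.min(axis=0), points.max(axis=0)
    center, scale = (pmin + pmax) / 2.0, (pmax - pmin).max()
    return (points - center) / scale, center, scale

def crop_half(points, direction):
    """Illustrative crop: keep the points on one side of a plane through the centroid."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    return points[(points - points.mean(axis=0)) @ d < 0]

complete = np.random.rand(2048, 3) * 1.7   # stand-in for one complete human scan

# Ordering 1 (what I trained with): normalize the complete cloud, then crop.
gt_a, _, _ = normalize_to_unit_box(complete)
partial_a = crop_half(gt_a, [1.0, 0.0, 0.0])

# Ordering 2 (what I am considering): crop first, normalize the partial cloud,
# then apply the partial's center/scale to the complete cloud.
partial_raw = crop_half(complete, [1.0, 0.0, 0.0])
partial_b, center_b, scale_b = normalize_to_unit_box(partial_raw)
gt_b = (complete - center_b) / scale_b
```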
crop first and normalize the partial
I think it will make the model unstable during training, since the scale is highly conditioned on the cropped part: for one sample in the training set, two different cropping directions will produce inputs with totally different scales.
I think a more promising way is to add a random_scale augmentation during training while keeping the normalize-then-crop style. This should improve the model's robustness.
See here: https://github.com/yuxumin/PoinTr/blob/master/utils/misc.py#L270
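Roughly, the idea is to draw one random scale factor and apply it jointly to the partial input and the ground truth, e.g. something like the sketch below (simplified; the scale range is arbitrary here and this is not necessarily identical to the implementation in misc.py):

```python
import torch

def random_scale(partial: torch.Tensor, gt: torch.Tensor, scale_range=(0.8, 1.2)):
    """Scale the partial input and its ground truth by the same random factor,
    so the pair stays consistent while the absolute scale varies between samples.
    """
    lo, hi = scale_range
    scale = torch.rand(1, device=partial.device) * (hi - lo) + lo
    return partial * scale, gt * scale
```

During training it would be called on each (partial, gt) pair before the forward pass, just like the line quoted from the dataloader below.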
OK, I'll try!
Thanks!
Thanks for your reply.
Is it only used for KITTI finetune?
partial, gt = misc.random_scale(partial, gt) # specially for KITTI finetune
@wra187611 Hi, I am also training PoinTr on my own teeth data and I have the same problem as you. Could you tell me whether you solved it?
Hi, yes, this is only used for KITTI finetuning, for robust performance when dealing with partial LiDAR input.