Guide for Noobs
NiluK opened this issue · 6 comments
Hi folks this is an amazing paper. Thank you so much for this.
It would be wonderful if there was a noob guide on how to set this up in your local machine /ec2 instance.
Specifically, all I want to do is input a sparse point cloud, in .npz or .ply format, and get back a completed point cloud.
I have this point cloud:
https://qndkppmmrjbidgjntwgt.supabase.co/storage/v1/object/public/3d-models/items/test/55c820a2-e5d3-455a-b678-8e76e2.ply which I generated with point-e from an image.
It's OK, but as you can see from the surface reconstruction (I used GenSDF, https://arxiv.org/pdf/2206.02780.pdf), the result isn't smooth because significant portions of the point cloud are missing: https://qndkppmmrjbidgjntwgt.supabase.co/storage/v1/object/public/3d-models/items/336d58fe-93f8-11ed-935b-f9279e04a467/reconstruct.ply
I think this paper could help a lot with point cloud completion here, but I have some difficulty running it.
Is there an easy way to run inference on this with a pretrained model? A guide for noobs would be very much appreciated!
Basic expectation: run a Python script that takes in a .ply and spits out a completed .ply.
Thanks
Hi, thanks for your interest.
I agree that it would be helpful to have some simple inference code for completion.
I will write a simple API and upload it under tools.
Best!
Thanks so much. I don't know if it works yet, but I had some trouble getting a fresh install working. I will create a PR with my suggested changes.
I have created a PR with the installation changes here.
@NiluK I have tried the sample you uploaded under this issue. I've uploaded the input and the prediction results below:
(Using pretrained models on PCN benchmark)
The input of your provided sample:
Inference result using the AdaPoinTr weights pretrained on PCN:
Inference result using pretrained models on the ShapeNet55 benchmark (for the pretrained weights, see here):

`python tools/inference.py cfgs/ShapeNet55_models/AdaPoinTr.yaml .ckpts/AdaPoinTr_s55.pth --pc demo/input.ply --save_vis_img --out_pc_root inference_result`
So the result highly depends on how the training pairs for the completion model were generated. (PCN: partial inputs are produced by back-projection, see Fig. 3 here; ShapeNet-55: partial inputs are produced by cropping.)
Thanks for your contribution to this repo.
Best.
Hi yuxmin, ShapeNet-55 works excellently. Cheers!