Junsheng Zhou* · Baorui Ma* · Shujuan Li · Yu-Shen Liu · Zhizhong Han
(* Equal Contribution)
We release the code of the paper Learning a More Continuous Zero Level Set in Unsigned Distance Fields through Level Set Projection in this repository.
Our code is implemented in Python 3.8, PyTorch 1.11.0 and CUDA 11.3.
- Install Python dependencies

```shell
conda create -n levelsetudf python=3.8
conda activate levelsetudf
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
pip install tqdm pyhocon==0.3.57 trimesh PyMCubes scipy point_cloud_utils==0.29.7
```
- Compile C++ extensions

```shell
cd extensions/chamfer_dist
python setup.py install
```
For a quick start, you can train our LevelSetUDF to reconstruct surfaces from a single point cloud:

```shell
python run.py --gpu 0 --conf confs/object.conf --dataname demo_car --dir demo_car
```
- We provide the data for a demo car in the `./data` folder for a quick start on LevelSetUDF.

You can find the outputs in the `./outs` folder:
```
│outs/
├── demo_car/
│   ├── mesh
│   ├── densepoints
│   ├── normal
```
- The reconstructed meshes are saved in the `mesh` folder
- The upsampled dense point clouds are saved in the `densepoints` folder
- The estimated normals for the point cloud are saved in the `normal` folder
We also provide instructions for training on your own data below.
First, put your own data in the `./data/input` folder. The dataset is organized as follows:
```
│data/
│── input
│   ├── (dataname).ply/xyz/npy
```
We support point clouds in the `.ply`, `.xyz`, and `.npy` formats.
To train your own data, simply run:

```shell
python run.py --gpu 0 --conf confs/object.conf --dataname (dataname) --dir (dataname)
```
- To achieve better performance on point clouds of different complexity, the loss weights should be adjusted. For example, we provide two configs in the `./confs` folder, i.e., `object.conf` and `scene.conf`. If you are reconstructing large-scale scenes, `scene.conf` is recommended; otherwise, `object.conf` should work fine for object-level reconstructions.
- The hyperparameter `scale`, which controls the distance between the query points and the point cloud, has a strong influence on the final result because point cloud density varies across datasets and your own data. Adjust this parameter to get better results. We give `0.25 * np.sqrt(POINT_NUM_GT / 20000)` as a reference value, which can be used for most object-level reconstructions.
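The reference value above can be computed directly; a minimal sketch (the helper name is ours, not part of the repository):

```python
import numpy as np


def reference_scale(point_num_gt: int) -> float:
    # Reference value from above: 20000 ground-truth points is the
    # baseline density, so sparser clouds get a smaller scale and
    # denser clouds a larger one.
    return 0.25 * np.sqrt(point_num_gt / 20000)


print(reference_scale(20000))  # 0.25 at the baseline density
print(reference_scale(5000))   # 0.125 for a 4x sparser cloud
```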
Please also check out the following works that inspired us:
- Junsheng Zhou et al. - Learning Consistency-Aware Unsigned Distance Functions Progressively from Raw Point Clouds (NeurIPS 2022)
- Baorui Ma et al. - Neural-Pull: Learning Signed Distance Functions from Point Clouds by Learning to Pull Space onto Surfaces (ICML 2021)
- Baorui Ma et al. - Surface Reconstruction from Point Clouds by Learning Predictive Context Priors (CVPR 2022)
- Baorui Ma et al. - Reconstructing Surfaces for Sparse Point Clouds with On-Surface Priors (CVPR 2022)
If you find our code or paper useful, please consider citing:

```bibtex
@inproceedings{zhou2023levelset,
  title={Learning a More Continuous Zero Level Set in Unsigned Distance Fields through Level Set Projection},
  author={Zhou, Junsheng and Ma, Baorui and Li, Shujuan and Liu, Yu-Shen and Han, Zhizhong},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}
```