This is the official implementation of "PartSLIP++: Enhancing Low-Shot 3D Part Segmentation via Multi-View Instance Segmentation and Maximum Likelihood Estimation".
Create and activate the conda environment:

```
conda env create -f environment.yml
conda activate partslip++
```
We utilize PyTorch3D for rendering point clouds. Please install it with the following command (or follow its official guide):

```
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
```
We incorporate GLIP with some small modifications. Please clone our modified version and install it with the following commands (or follow its official guide):

```
git submodule update --init
cd GLIP
python setup.py build develop --user
```
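GLIP installs as the `maskrcnn_benchmark` package; a quick import sanity check (a sketch, not part of the repo) can confirm the install succeeded:

```python
# Verify the GLIP (maskrcnn_benchmark) install is importable
import maskrcnn_benchmark
from maskrcnn_benchmark.config import cfg

print(maskrcnn_benchmark.__file__)  # should resolve into the GLIP submodule
print(cfg.MODEL.DEVICE)             # the default config should load without error
```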
We utilize cut-pursuit for computing superpoints. Please install it with the following commands (or follow its official guide):

```
CONDAENV=YOUR_CONDA_ENVIRONMENT_LOCATION
cd partition/cut-pursuit
mkdir build
cd build
cmake .. -DPYTHON_LIBRARY=$CONDAENV/lib/libpython3.9.so -DPYTHON_INCLUDE_DIR=$CONDAENV/include/python3.9 -DBOOST_INCLUDEDIR=$CONDAENV/include -DEIGEN3_INCLUDE_DIR=$CONDAENV/include/eigen3
make
```
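The build produces a Python binding for cut-pursuit. Below is a usage sketch assuming a `superpoint_graph`-style layout, where the module builds to `build/src` as `libcp`; the path, module name, and `cutpursuit` signature are assumptions here (see `gen_sp.py` for how this repo actually computes superpoints):

```python
import sys
import numpy as np

# Assumed build output location (superpoint_graph-style layout)
sys.path.append("partition/cut-pursuit/build/src")
import libcp  # compiled cut-pursuit binding; module name is an assumption

# Toy inputs: per-point features and a random k-NN-style edge list
n_points, n_edges = 1000, 5000
features = np.random.rand(n_points, 4).astype("float32")
source = np.random.randint(0, n_points, n_edges, dtype=np.uint32)
target = np.random.randint(0, n_points, n_edges, dtype=np.uint32)
edge_weight = np.ones(n_edges, dtype="float32")

# Partition points into superpoints; the (features, source, target,
# edge_weight, reg_strength) signature follows the superpoint_graph wrapper
components, in_component = libcp.cutpursuit(features, source, target, edge_weight, 0.1)
print(len(components), "superpoints")
```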
We also use the Segment Anything Model (SAM). Install it with:

```
pip install git+https://github.com/facebookresearch/segment-anything.git
```
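In PartSLIP++, SAM turns 2D bounding boxes (e.g., GLIP detections) into instance masks. Here is a minimal box-prompted sketch, where the checkpoint path, model type, and box are placeholders:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Checkpoint path and model type are placeholders
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth").to("cuda")
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for a rendered view
predictor.set_image(image)                       # expects HxWx3 uint8 RGB

# A single box prompt in XYXY format
box = np.array([100, 100, 300, 300])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)  # (1, 512, 512) boolean mask and its predicted IoU
```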
You can find the PartNet-Ensembled dataset used in the paper here. Put the downloaded data in the `./data` folder.
You can find the pre-trained checkpoints here. Please use our few-shot checkpoints for each object category. Put the downloaded checkpoints in the `./model` folder.
To compute and save the superpoints and other intermediate results, run:

```
python gen_sp.py
```

To run the original PartSLIP pipeline:

```
python run_partslip.py
```

To run PartSLIP++:

```
python run_partslip++.py
```