Shaper is the software system of Hao Su's Lab that implements state-of-the-art 3D point cloud algorithms, including PointNet and its variants. It is written in Python and powered by the PyTorch deep learning framework.
The goal of Shaper is to provide a high-quality, high-performance codebase for point cloud research. It is designed to be flexible in order to support rapid implementation and evaluation of novel research. Shaper includes implementations of the following point cloud algorithms:
- PointNet
- PointNet++
- DGCNN
Please check the Model Zoo for benchmark results.
Shaper is built on PyTorch 1.0 with CUDA 9.0 and cuDNN 7.4.1.
It is recommended to use (mini)conda to manage the environment. setuptools is used to set up the Python environment so that the package is visible on PYTHONPATH.
# create anaconda environment
bash install.sh
# Remember to use "develop" so that modifications to Python files take effect without reinstalling.
python setup.py develop
Custom CUDA extensions are written to speed up calculations; see the references at the end of this document for how to write CUDA extensions for PyTorch. To run models such as PointNet++ and DGCNN, the extension source files must be compiled first.
# take DGCNN for example
cd shaper/models/dgcnn_utils
python setup.py build_ext --inplace
Shaper currently supports several datasets, such as ModelNet40 and ShapeNet. Scripts to download the data are provided in scripts/. It is recommended to create symbolic links for datasets.
# take ModelNet40 for example
mkdir data
cd data
bash ../scripts/download_modelnet.sh
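If a dataset already exists elsewhere on disk, a symbolic link avoids duplicating it. A minimal sketch (the source path below is a stand-in, not a real dataset location):

```shell
# Link an existing dataset directory into ./data (the source path is hypothetical).
SRC_DIR=$(mktemp -d)        # stands in for e.g. a shared ModelNet40 directory
mkdir -p data
ln -sfn "$SRC_DIR" data/modelnet40
ls -l data/modelnet40       # shows the symlink and its target
```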
YACS, a simple experiment configuration system for research, is used to configure both training and testing. It is a library developed by Facebook Research and used in projects like Detectron.
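As an illustration, a config YAML typically overrides a subset of the default options. The keys below are hypothetical and only sketch the general shape, not Shaper's actual schema:

```yaml
# Hypothetical sketch of a YACS-style config; key names are illustrative only.
OUTPUT_DIR: "outputs/pointnet_cls"   # logs, checkpoints, and tensorboard events go here
DATASET:
  TYPE: "ModelNet40"
  NUM_POINTS: 1024
TRAIN:
  BATCH_SIZE: 32
  MAX_EPOCH: 250
```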
python tools/train_cls.py --cfg=configs/baselines/pointnet_cls.yaml
The training logs, model weights, and TensorBoard events will be saved to the output directory specified in the YAML config. TensorBoard is supported for monitoring training status.
python tools/test_cls.py --cfg=configs/baselines/pointnet_cls.yaml
pytest is recommended for unit testing; it can be installed with pip. pytest automatically discovers and runs all functions and Python files whose names start with "test".
cd tests
# run all the files starting with "test"
pytest -s
# run all the functions starting with "test" within "test_functional.py"
pytest -s test_functional.py
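For instance, a file following the discovery convention might look like this (the helper below is a toy example, not part of Shaper):

```python
# test_example.py -- pytest collects files and functions whose names start with "test".

def normalize(coords):
    """Toy helper: center a list of 1-D coordinates around zero."""
    mean = sum(coords) / len(coords)
    return [c - mean for c in coords]

def test_normalize():
    out = normalize([1.0, 2.0, 3.0])
    assert out == [-1.0, 0.0, 1.0]
    assert abs(sum(out)) < 1e-9
```

Running pytest -s in the containing directory would pick up test_normalize automatically.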
- Create a new branch for new features, bug fixes, or new projects.
- Add unit tests for new code:
  - Add tests for new models in place.
  - Add tests for new operators or multi-function utilities in tests/.
- Open a pull request to master if the change is generally useful.
- Reuse code as much as possible:
  - Modular design
  - Inherit from existing classes
  - Add options instead of making a copy (if there are not too many options)
- Write unit tests (pytest) for your code in tests/.
- Use setup.py to build Python packages and PyTorch (CUDA) extensions.
- Create a new branch and a new folder for a new project.
- Naming a tensor:
  - Singular form in general, e.g. input, index, feature.
  - Plural form sometimes, e.g. points, centroids; an alternative is point_cloud, centroid_set.
- Naming the number of elements: e.g. num_points, num_scales.
- Naming a module:
  - Singular form for an nn.Module or an implicit nn.ModuleList (e.g. SharedMLP).
  - Plural form for an explicit nn.ModuleList.
- Naming a sequence or dictionary: e.g. preds, labels.
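As a small illustration of these conventions (the arrays and helper below are made up for this example, not taken from Shaper):

```python
import numpy as np

# Plural form for a batch of elements; num_* for counts.
num_points = 1024
points = np.random.rand(num_points, 3)         # (num_points, 3) point cloud

def farthest_point(points):
    """Toy helper: return the point farthest from the centroid."""
    centroid = points.mean(axis=0)             # singular: one centroid
    dists = np.linalg.norm(points - centroid, axis=1)
    index = int(dists.argmax())                # singular: one index
    return points[index]

feature = farthest_point(points)               # singular: a single element
```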
- https://google.github.io/styleguide/pyguide.html
- https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html
shaper/models/dgcnn_utils could be a good tutorial on how to write a CUDA extension. In general, setup.py builds extensions by compiling the source files (".cpp", ".cu") within csrc.
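For orientation, a typical PyTorch extension setup.py looks roughly like the sketch below; the module and file names are placeholders, not Shaper's actual ones:

```python
# Hypothetical sketch of a PyTorch CUDA-extension setup.py; names are illustrative.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="example_ext",
    ext_modules=[
        CUDAExtension(
            name="example_ext",
            sources=["csrc/example.cpp", "csrc/example_kernel.cu"],
        ),
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

Running python setup.py build_ext --inplace then compiles the listed sources into an importable module.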
- https://pytorch.org/tutorials/advanced/cpp_extension.html
- https://pytorch.org/cppdocs
- https://github.com/pytorch/extension-cpp
- https://devblogs.nvidia.com/even-easier-introduction-cuda
- https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html
Note that there are some known issues in the source code. For example, some functions are only declared but not defined, and errors may occur if the wrong PyTorch version is used.