Created by Yongheng Zhao, Tolga Birdal, Jan Eric Lenssen, Emanuele Menegatti, Leonidas Guibas, and Federico Tombari.
This repository contains the implementation of our ECCV 2020 paper Quaternion Equivariant Capsule Networks for 3D Point Clouds (QEC-Net). In particular, we release code for training and testing QEC-Net for classification and relative rotation estimation for 3D shapes as well as the pre-trained models for quickly replicating our results.
For an intuitive explanation of QEC-Net, please check out our ECCV oral presentation.
For the source code, please visit this GitHub repository.
We present a 3D capsule module for processing point clouds that is equivariant to 3D rotations and translations, as well as invariant to permutations of the input points. The operator receives a sparse set of local reference frames, computed from an input point cloud, and establishes end-to-end transformation equivariance through a novel dynamic routing procedure on quaternions. Further, we theoretically connect dynamic routing between capsules to the well-known Weiszfeld algorithm, a scheme for solving *iterative re-weighted least squares* (IRLS) problems with provable convergence properties. It is shown that such group dynamic routing can be interpreted as robust IRLS rotation averaging on capsule votes, where information is routed based on the final inlier scores. Based on our operator, we build a capsule network that disentangles geometry from pose, paving the way for more informative descriptors and a structured latent space. Our architecture allows joint object classification and orientation estimation without explicit supervision of rotations. We validate our algorithm empirically on common benchmark datasets.
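The routing-as-Weiszfeld connection can be illustrated with a small numerical sketch. The snippet below is illustrative only (names and details are ours, not the paper's implementation): it runs IRLS averaging over unit quaternion votes, re-weighting each vote by the inverse of its distance to the current estimate, so outlier votes end up with low weight, analogous to inlier scores in routing.

```python
import numpy as np

def weiszfeld_quat_mean(votes, n_iters=10, eps=1e-8):
    """Weiszfeld-style IRLS robust average of unit quaternion votes.

    Each iteration re-weights every vote by the inverse of its distance
    to the current estimate, so outlier votes receive low weight.
    Returns the averaged quaternion and the final per-vote weights.
    """
    q = votes[0].copy()  # initial estimate
    for _ in range(n_iters):
        # q and -q encode the same rotation: flip votes onto q's hemisphere
        signs = np.sign(votes @ q)
        signs[signs == 0] = 1.0
        aligned = votes * signs[:, None]
        d = np.linalg.norm(aligned - q, axis=1)
        w = 1.0 / (d + eps)              # IRLS inverse-distance weights
        w /= w.sum()
        q = (w[:, None] * aligned).sum(axis=0)
        q /= np.linalg.norm(q)           # project back to the unit sphere
    return q, w

# Four noisy votes around the identity rotation plus one outlier:
votes = np.array([
    [0.99,  0.01,  0.00,  0.00],
    [0.98, -0.02,  0.01,  0.00],
    [0.99,  0.00,  0.02, -0.01],
    [0.97,  0.02, -0.01,  0.01],
    [0.00,  1.00,  0.00,  0.00],  # outlier: 180-degree rotation about x
])
votes /= np.linalg.norm(votes, axis=1, keepdims=True)

q_mean, weights = weiszfeld_quat_mean(votes)
print(q_mean)            # close to the identity quaternion [1, 0, 0, 0]
print(weights.argmin())  # the outlier receives the smallest weight
```

The inverse-distance re-weighting is exactly the Weiszfeld update for the geometric median, here applied on the quaternion hypersphere with renormalization after each averaging step.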
If you find our work useful in your research, please consider citing:
@article{zhao2019quaternion,
title={Quaternion Equivariant Capsule Networks for 3D Point Clouds},
author={Zhao, Yongheng and Birdal, Tolga and Lenssen, Jan Eric and Menegatti, Emanuele and Guibas, Leonidas and Tombari, Federico},
journal={arXiv preprint arXiv:1912.12098},
year={2019}
}
@inproceedings{zhao20193d,
author={Zhao, Yongheng and Birdal, Tolga and Deng, Haowen and Tombari, Federico},
booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
title={3D Point Capsule Networks},
organization={IEEE/CVF},
year={2019}
}
The code is based on PyTorch. It has been tested with Python 3.6+, PyTorch 1.1.0, and CUDA 10.0 (or higher) on Ubuntu 18.04. We suggest building the environment with Anaconda.
Install batch-wise eigenvalue decomposition package:
cd models/pytorch-cusolver
python setup.py install
cd ../pytorch-autograd-solver
python setup.py install
(Note: install pytorch-cusolver before pytorch-autograd-solver.)
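These two packages provide CUDA-accelerated batch-wise symmetric eigendecomposition, which is used to compute local reference frames from point neighborhoods. As a rough CPU-only illustration of the batched operation (our toy data, using `numpy.linalg.eigh` as a stand-in, not the packages' actual API):

```python
import numpy as np

# Toy batch: 8 local neighborhoods of 16 points each.
rng = np.random.default_rng(0)
neighborhoods = rng.normal(size=(8, 16, 3))           # (batch, points, xyz)
centered = neighborhoods - neighborhoods.mean(axis=1, keepdims=True)
covs = centered.transpose(0, 2, 1) @ centered / 16.0  # (8, 3, 3), symmetric

# Batched symmetric eigendecomposition; eigenvalues come out in ascending order.
eigvals, eigvecs = np.linalg.eigh(covs)

# The eigenvectors of each covariance matrix form a local reference frame:
# the columns of eigvecs[b] are the principal axes of neighborhood b.
print(eigvals.shape, eigvecs.shape)  # (8, 3) (8, 3, 3)
```

The custom CUDA packages perform the same per-matrix decomposition, but batched on the GPU and differentiably, so it can sit inside the training graph.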
To visualize the training process in PyTorch, consider installing TensorBoard.
pip install tensorflow==1.14
To visualize the point clouds, consider installing Open3D.
pip install open3d
Coming soon...
Generate multiple random samples and downsample:
cd my_dataloader
python gen_downsample_index.py
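The script above pre-generates random downsampling indices for the point clouds. Conceptually (the function name and sizes below are illustrative, not the script's actual contents), it draws multiple random index subsets per cloud so the same downsampled views can be reused across runs:

```python
import numpy as np

def gen_downsample_indices(n_points, n_samples, sample_size, seed=0):
    """Draw `n_samples` random index subsets of size `sample_size`
    from a cloud of `n_points` points (without replacement per subset)."""
    rng = np.random.default_rng(seed)
    return np.stack([
        rng.choice(n_points, size=sample_size, replace=False)
        for _ in range(n_samples)
    ])

# e.g. 4 random 512-point views of a 2048-point cloud
idx = gen_downsample_indices(2048, 4, 512)
cloud = np.random.rand(2048, 3)
views = cloud[idx]  # shape (4, 512, 3)
print(views.shape)
```

Storing the indices rather than the downsampled points keeps the dataset on disk unchanged while making the random views reproducible.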
Coming soon...
You can download the pre-trained models here.
- Train classification without rotation augmentation:
python train_cls.py --inter_out_channels 128 --num_iterations 3
- Train the Siamese architecture with the relative rotation loss:
python train_cls_sia.py --inter_out_channels 128 --num_iterations 3
- Test classification under unseen orientations:
python test_cls.py --inter_out_channels 128 --num_iterations 3
- Test rotation estimation with the Siamese architecture:
python test_rot_sia.py
Our code is released under the MIT License (see the LICENSE file for details).
To do:
- Add the python cusolver repo.
- Add Jan's repo.
- Add the 3D Point Capsule Networks repo.
- Add more details of the experiments.
- Add the dataset and pre-trained models with a Google Drive link.
- Add more experiments.
- Add code references in the README.
- Add more animations ...