/view-GCN

PyTorch code for view-GCN (CVPR 2020).

Xin Wei, Ruixuan Yu and Jian Sun. View-GCN: View-based Graph Convolutional Network for 3D Shape Analysis. CVPR, 2020. [pdf] [supp]

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{Wei_2020_CVPR,
author = {Wei, Xin and Yu, Ruixuan and Sun, Jian},
title = {View-GCN: View-Based Graph Convolutional Network for 3D Shape Analysis},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}

Training

Requirements

This code is tested with Python 3.6 and PyTorch 1.0+.

Dataset

First, download the 20-view ModelNet40 dataset provided by [rotationnet] and put it under `data`:

https://drive.google.com/file/d/1Z8UphI48B9KUJ9zhIhcgXaRCzZPIlztb/view?usp=sharing

Rotated-ModelNet40 dataset: ``

Aligned-ScanObjectNN dataset: https://drive.google.com/file/d/1ihR6Fv88-6FOVUWdfHVMfDbUrx2eIPpR/view?usp=sharing

Rotated-ScanObjectNN dataset: https://drive.google.com/file/d/1GCwgrfbO_uO3Qh9UNPWRCuz2yr8UyRRT/view?usp=sharing

Command for training:

python train.py -name view-gcn -num_models 0 -weight_decay 0.001 -num_views 20 -cnn_name resnet18
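View-based networks like view-GCN feed all rendered views of a shape through a shared CNN backbone before aggregating them. The sketch below illustrates the usual batching trick for this, matching the `-num_views 20` flag above; the tensor names and image size are illustrative assumptions, not this repository's actual API.

```python
import torch

# Hypothetical sketch: a multi-view batch arranged as (B, V, C, H, W),
# where V is the number of rendered views per shape (20 here, as in the
# training command above). The 224x224 resolution is an assumption.
num_views = 20
batch_size = 2
views = torch.randn(batch_size, num_views, 3, 224, 224)

# A shared CNN backbone (e.g. the resnet18 selected by -cnn_name) can
# process all views at once by folding the view dimension into the
# batch dimension, then unfolding the per-view features afterwards.
flat = views.view(batch_size * num_views, 3, 224, 224)
print(flat.shape)  # torch.Size([40, 3, 224, 224])
```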

The code is heavily borrowed from [mvcnn-new].

We also provide a trained view-GCN network achieving 97.6% accuracy on ModelNet40.

https://drive.google.com/file/d/1qkltpvabunsI7frVRSEC9lP2xDP6cDj3/view?usp=sharing
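A downloaded checkpoint is typically restored with `torch.load` and `load_state_dict`. The snippet below is a minimal, self-contained sketch of that pattern; the real model class and checkpoint filename come from this repository's code and are replaced here by stand-ins.

```python
import torch
import torch.nn as nn

# Stand-in for the actual view-GCN model class defined in this
# repository; the checkpoint filename is likewise an assumption.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "view-gcn.pth")

# Restore the saved weights on CPU and switch to inference mode.
state = torch.load("view-gcn.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()
```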

References

Asako Kanezaki, Yasuyuki Matsushita and Yoshifumi Nishida. RotationNet: Joint Object Categorization and Pose Estimation Using Multiviews from Unsupervised Viewpoints. CVPR, 2018.

Jong-Chyi Su, Matheus Gadelha, Rui Wang, and Subhransu Maji. A Deeper Look at 3D Shape Classifiers. Second Workshop on 3D Reconstruction Meets Semantics, ECCV, 2018.