
FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis

This is a PyTorch implementation of Feature-Steered Graph Convolutions (FeaStNet) for the task of dense shape correspondence on the FAUST human dataset, as described in the paper:

Verma et al, FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis (CVPR 2018)

This implementation produces better results than those reported in the paper with exactly the same network architecture.

Requirements
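
At a minimum, the code assumes working installations of PyTorch and PyTorch Geometric, which provides the FeaStConv operator and the torch_geometric.data.Data objects used below.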

FeaStNet

As a typical attention-based operator, FeaStNet learns a soft mapping from vertices to filter weights. The convolution is

$$\mathbf{y}_i = \mathbf{b} + \frac{1}{|\mathcal{N}(i)|} \sum_{j \in \mathcal{N}(i)} \sum_{m=1}^{M} q_m(\mathbf{x}_i, \mathbf{x}_j)\, \mathbf{W}_m \mathbf{x}_j,$$

where the assignment function is $q_m(\mathbf{x}_i, \mathbf{x}_j) \propto \exp(\mathbf{u}_m^{\top}\mathbf{x}_i + \mathbf{v}_m^{\top}\mathbf{x}_j + c_m)$, normalized so that $\sum_{m=1}^{M} q_m(\mathbf{x}_i, \mathbf{x}_j) = 1$. Letting $\mathbf{v}_m = -\mathbf{u}_m$ gives $q_m(\mathbf{x}_i, \mathbf{x}_j) \propto \exp\big(\mathbf{u}_m^{\top}(\mathbf{x}_j - \mathbf{x}_i) + c_m\big)$, which makes the weights translation invariant in the feature space and gives much better performance.
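
A minimal from-scratch sketch of this translation-invariant convolution for a single vertex i (the number of weight matrices, feature sizes, and neighbourhood below are illustrative placeholders, not values or code from this repository):

```python
import torch

# Illustrative sizes only: M weight matrices, d_in input / d_out output features,
# and a toy 5-vertex neighbourhood.
M, d_in, d_out = 8, 16, 32
x_i = torch.randn(d_in)          # feature of vertex i
x_nbrs = torch.randn(5, d_in)    # features of the neighbours j in N(i)

W = torch.randn(M, d_out, d_in)  # weight matrices W_m
u = torch.randn(M, d_in)         # u_m (translation-invariant form, v_m = -u_m)
c = torch.randn(M)               # c_m
b = torch.randn(d_out)           # bias

# q_m(x_i, x_j) = softmax over m of u_m^T (x_j - x_i) + c_m
logits = (x_nbrs - x_i) @ u.t() + c   # [|N(i)|, M]
q = torch.softmax(logits, dim=1)      # each row sums to 1 over the M weights

# y_i = b + (1 / |N(i)|) * sum_j sum_m q_m(x_i, x_j) W_m x_j
Wx = torch.einsum('mod,nd->nmo', W, x_nbrs)   # [|N(i)|, M, d_out]
y_i = b + (q.unsqueeze(-1) * Wx).sum(dim=(0, 1)) / x_nbrs.size(0)
```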

We provide an efficient PyTorch implementation of the translation-invariant version of this operator, FeaStConv. The same operator is also available in the PyTorch Geometric library.
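
A hedged usage sketch of the PyTorch Geometric version; the channel sizes, number of heads, and the random mesh connectivity below are illustrative, not this repository's architecture or data:

```python
import torch
from torch_geometric.nn import FeaStConv

num_nodes = 6890                                   # e.g. a FAUST mesh has 6890 vertices
x = torch.randn(num_nodes, 3)                      # 3D vertex positions as input features
edge_index = torch.randint(0, num_nodes, (2, 4 * num_nodes))  # placeholder connectivity

conv = FeaStConv(in_channels=3, out_channels=16, heads=8)     # heads = number of weight matrices M
out = conv(x, edge_index)                          # [num_nodes, 16]
```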

Run

python main.py

Data

In order to use your own dataset, you can simply create a regular Python list holding torch_geometric.data.Data objects and specify the following attributes (a sketch follows the list below):

  • data.x: Node feature matrix with shape [num_nodes, num_node_features]
  • data.edge_index: Graph connectivity in COO format with shape [2, num_edges] and type torch.long
  • data.edge_attr: Pseudo-coordinates with shape [num_edges, pseudo_coordinate_dim]
  • data.y: Target to train against
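
For example, a minimal sketch of building such a list with random placeholder tensors (the graph size and the 3-dimensional pseudo-coordinates are illustrative, not this repository's preprocessing):

```python
import torch
from torch_geometric.data import Data

num_nodes, num_edges = 100, 400                    # placeholder graph size
x = torch.randn(num_nodes, 3)                      # node feature matrix
edge_index = torch.randint(0, num_nodes, (2, num_edges), dtype=torch.long)
edge_attr = torch.rand(num_edges, 3)               # pseudo-coordinates per edge
y = torch.arange(num_nodes)                        # e.g. per-vertex correspondence labels

dataset = [Data(x=x, edge_index=edge_index, edge_attr=edge_attr, y=y)]
```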

Cite

Please cite this paper if you use this code in your own work:

@inproceedings{verma2018feastnet,
  title={Feastnet: Feature-steered graph convolutions for 3d shape analysis},
  author={Verma, Nitika and Boyer, Edmond and Verbeek, Jakob},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2598--2606},
  year={2018}
}