FastGCN: Method and Theory
This repository contains a Matlab implementation of FastGCN (Chen et al., 2018), together with companion code for the optimization theory paper Chen and Luss (2018). The Matlab code for FastGCN is observed to run substantially faster than other implementations in TensorFlow or PyTorch. The theory paper, which analyzes stochastic gradient descent with biased but consistent gradient estimators, provides the theoretical foundation for FastGCN.
For the original FastGCN code published with the paper (implemented in TensorFlow), see https://github.com/matenure/FastGCN.
FastGCN
See the directory `fastgcn`. Start from `test_fastgcn.m`.
Reference
Jie Chen, Tengfei Ma, and Cao Xiao. FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling. In ICLR, 2018.
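The core idea of FastGCN is to replace the full neighborhood aggregation in each GCN layer with a Monte Carlo estimate over a few nodes sampled by importance. The following NumPy sketch (illustrative only, not the repository's Matlab code) shows the sampling scheme from the paper, where a node u is drawn with probability proportional to the squared norm of its column in the normalized adjacency matrix; the matrix sizes and density here are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem sizes (assumptions for illustration).
n, d_in, d_out, t = 100, 16, 8, 20              # nodes, feature dims, sample size
A = rng.random((n, n)) * (rng.random((n, n)) < 0.05)  # toy sparse adjacency
H = rng.standard_normal((n, d_in))              # current layer activations
W = rng.standard_normal((d_in, d_out))          # layer weights

# Importance distribution q(u) proportional to ||A[:, u]||^2 (FastGCN's choice).
col_norms = np.linalg.norm(A, axis=0) ** 2
q = col_norms / col_norms.sum()

# Sample t columns and form the importance-weighted estimate of A @ H @ W,
# touching only t rows of H instead of all n.
idx = rng.choice(n, size=t, p=q)
est = (A[:, idx] / (t * q[idx])) @ H[idx, :] @ W

exact = A @ H @ W
print(est.shape, exact.shape)  # both (n, d_out)
```

Because each layer samples independently, the per-layer cost drops from O(n) aggregated nodes to O(t), which is where the speedup over full-batch GCN training comes from.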
SGD with Biased but Consistent Gradient Estimators
See the directory `sgd_paper`. Start from `test_1layer.m` and `test_2layer.m`.
Reference
Jie Chen and Ronny Luss. Stochastic Gradient Descent with Biased but Consistent Gradient Estimators. Preprint arXiv:1807.11880, 2018.
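The phrase "biased but consistent" means the gradient estimator is biased at any finite sample size, yet the bias vanishes as the sample size grows. A minimal toy illustration (my own construction, not from the paper): for the objective f(w) = 0.5 (mu^T w)^2 with mu = E[x], plugging the sample mean into both factors of the gradient gives an estimator whose expectation contains an O(1/t) bias term, yet SGD with it still converges.

```python
import numpy as np

rng = np.random.default_rng(1)

# f(w) = 0.5 * (mu^T w)^2, with mu = E[x]; the true gradient is (mu^T w) * mu.
# Using the sample mean xbar in both factors, g = (xbar^T w) * xbar, gives
# E[g] = (mu^T w) * mu + Cov(x) @ w / t: biased for finite batch size t,
# but consistent since the bias vanishes as t grows.
mu = np.array([1.0, -2.0, 0.5])
w = np.array([3.0, 1.0, -1.0])
t, lr = 64, 0.05

for step in range(500):
    X = mu + rng.standard_normal((t, 3))   # i.i.d. samples of x
    xbar = X.mean(axis=0)
    g = (xbar @ w) * xbar                  # biased but consistent gradient
    w = w - lr * g

print(abs(mu @ w))  # small: mu^T w is driven toward 0, the minimizer condition
```

FastGCN's sampled-layer gradients have the same character: each minibatch gradient is biased relative to the full-graph gradient, but consistent as the per-layer sample size increases, which is the regime the theory paper analyzes.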