
Literature on Graph Neural Networks Acceleration

A reading list for deep graph learning acceleration, covering research at both the software and hardware levels. The list includes related papers, conferences, tools, books, blogs, courses, and other resources. It is maintained by a team of Maintainers, and contributions from anyone are welcome.

The literature on this page is organized into the following topics:

  • Hardware Acceleration for Graph Neural Networks
  • System Designs for Deep Graph Learning
  • Algorithmic Acceleration for Graph Neural Networks
  • Surveys and Performance Analysis on Graph Learning

Click here to view this literature in reverse chronological order. You can also find Related Conferences, Graph Learning Tools, Learning Materials on GNNs, and Other Resources in General Resources.


Hardware Acceleration for Graph Neural Networks

  • [HPCA 2022] Accelerating Graph Convolutional Networks Using Crossbar-based Processing-In-Memory Architectures.

    Huang Y, Zheng L, Yao P, et al. [Paper]

  • [HPCA 2022] GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design.

    You H, Geng T, Zhang Y, et al. [Paper] [GitHub]

  • [HPCA 2022] ReGNN: A Redundancy-Eliminated Graph Neural Networks Accelerator.

    Chen C, Li K, Li Y, et al. [Paper]

  • [ISCA 2022] DIMMining: pruning-efficient and parallel graph mining on near-memory-computing.

    Dai G, Zhu Z, Fu T, et al. [Paper]

  • [ISCA 2022] Hyperscale FPGA-as-a-service architecture for large-scale distributed graph neural network.

    Li S, Niu D, Wang Y, et al. [Paper]

  • [DAC 2022] Improving GNN-Based Accelerator Design Automation with Meta Learning.

    Bai Y, Sohrabizadeh A, Sun Y, et al. [Paper]

  • [CICC 2022] StreamGCN: Accelerating Graph Convolutional Networks with Streaming Processing.

    Sohrabizadeh A, Chi Y, Cong J. [Paper]

  • [IPDPS 2022] Model-Architecture Co-Design for High Performance Temporal GNN Inference on FPGA.

    Zhou H, Zhang B, Kannan R, et al. [Paper]

  • [TPDS 2022] SGCNAX: A Scalable Graph Convolutional Neural Network Accelerator With Workload Balancing.

    Li J, Zheng H, Wang K, et al. [Paper]

  • [TCSI 2022] A Low-Power Graph Convolutional Network Processor With Sparse Grouping for 3D Point Cloud Semantic Segmentation in Mobile Devices.

    Kim S, Kim S, Lee J, et al. [Paper]

  • [JSA 2022] Algorithms and architecture support of degree-based quantization for graph neural networks.

    Guo Y, Chen Y, Zou X, et al. [Paper]

  • [JSA 2022] QEGCN: An FPGA-based accelerator for quantized GCNs with edge-level parallelism.

    Yuan W, Tian T, Wu Q, et al. [Paper]

  • [FCCM 2022] GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration.

    Abi-Karam S, He Y, Sarkar R, et al. [Paper] [GitHub]

  • [FAST 2022] Hardware/Software Co-Programmable Framework for Computational SSDs to Accelerate Deep Learning Service on Large-Scale Graphs.

    Kwon M, Gouk D, Lee S, et al. [Paper]

  • [arXiv 2022] GROW: A Row-Stationary Sparse-Dense GEMM Accelerator for Memory-Efficient Graph Convolutional Neural Networks.

    Kang M, Hwang R, Lee J, et al. [Paper]

  • [arXiv 2022] Enabling Flexibility for Sparse Tensor Acceleration via Heterogeneity.

    Qin E, Garg R, Bambhaniya A, et al. [Paper]

  • [arXiv 2022] FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming.

    Sarkar R, Abi-Karam S, He Y, et al. [Paper] [GitHub]

  • [arXiv 2022] Low-latency Mini-batch GNN Inference on CPU-FPGA Heterogeneous Platform.

    Zhang B, Zeng H, Prasanna V. [Paper]

  • [arXiv 2022] SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures.

    Lee Y, Chung J, Rhu M. [Paper]

  • [MICRO 2020] AWB-GCN: A Graph Convolutional Network Accelerator with Runtime Workload Rebalancing.

    Geng T, Li A, Shi R, et al. [Paper]

  • [MICRO 2021] Point-X: A Spatial-Locality-Aware Architecture for Energy-Efficient Graph-Based Point-Cloud Deep Learning.

    Zhang J F, Zhang Z. [Paper]

  • [HPCA 2021] GCNAX: A Flexible and Energy-efficient Accelerator for Graph Convolutional Neural Networks.

    Li J, Louri A, Karanth A, et al. [Paper]

  • [DAC 2021] DyGNN: Algorithm and Architecture Support of Dynamic Pruning for Graph Neural Networks.

    Chen C, Li K, Zou X, et al. [Paper]

  • [DAC 2021] BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices.

    Zhou Z, Shi B, Zhang Z, et al. [Paper]

  • [DAC 2021] GNNerator: A Hardware/Software Framework for Accelerating Graph Neural Networks.

    Stevens J R, Das D, Avancha S, et al. [Paper]

  • [DAC 2021] PIMGCN: A ReRAM-Based PIM Design for Graph Convolutional Network Acceleration.

    Yang T, Li D, Han Y, et al. [Paper]

  • [TCAD 2021] Rubik: A Hierarchical Architecture for Efficient Graph Neural Network Training.

    Chen X, Wang Y, Xie X, et al. [Paper]

  • [TCAD 2021] Cambricon-G: A Polyvalent Energy-efficient Accelerator for Dynamic Graph Neural Networks.

    Song X, Zhi T, Fan Z, et al. [Paper]

  • [ICCAD 2021] DARe: DropLayer-Aware Manycore ReRAM architecture for Training Graph Neural Networks.

    Arka A I, Joardar B K, Doppa J R, et al. [Paper]

  • [DATE 2021] ReGraphX: NoC-Enabled 3D Heterogeneous ReRAM Architecture for Training Graph Neural Networks.

    Arka A I, Doppa J R, Pande P P, et al. [Paper]

  • [FCCM 2021] BoostGCN: A Framework for Optimizing GCN Inference on FPGA.

    Zhang B, Kannan R, Prasanna V. [Paper]

  • [SCIS 2021] Towards efficient allocation of graph convolutional networks on hybrid computation-in-memory architecture.

    Chen J, Lin G, Chen J, et al. [Paper]

  • [EuroSys 2021] Tesseract: distributed, general graph pattern mining on evolving graphs.

    Bindschaedler L, Malicevic J, Lepers B, et al. [Paper]

  • [EuroSys 2021] Accelerating Graph Sampling for Graph Machine Learning Using GPUs.

    Jangda A, Polisetty S, Guha A, et al. [Paper]

  • [ATC 2021] GLIST: Towards In-Storage Graph Learning.

    Li C, Wang Y, Liu C, et al. [Paper]

  • [CAL 2021] Hardware Acceleration for GCNs via Bidirectional Fusion.

    Li H, Yan M, Yang X, et al. [Paper]

  • [arXiv 2021] GNNIE: GNN Inference Engine with Load-balancing and Graph-Specific Caching.

    Mondal S, Manasi S D, Kunal K, et al. [Paper]

  • [arXiv 2021] LW-GCN: A Lightweight FPGA-based Graph Convolutional Network Accelerator.

    Tao Z, Wu C, Liang Y, et al. [Paper]

  • [arXiv 2021] VersaGNN: a Versatile accelerator for Graph neural networks.

    Shi F, Jin A Y, Zhu S C. [Paper]

  • [arXiv 2021] ZIPPER: Exploiting Tile- and Operator-level Parallelism for General and Scalable Graph Neural Network Acceleration.

    Zhang Z, Leng J, Lu S, et al. [Paper]

  • [HPCA 2020] HyGCN: A GCN Accelerator with Hybrid Architecture.

    Yan M, Deng L, Hu X, et al. [Paper]

  • [DAC 2020] Hardware Acceleration of Graph Neural Networks.

    Auten A, Tomei M, Kumar R. [Paper]

  • [ICCAD 2020] DeepBurning-GL: an automated framework for generating graph neural network accelerators.

    Liang S, Liu C, Wang Y, et al. [Paper]

  • [TC 2020] EnGN: A High-Throughput and Energy-Efficient Accelerator for Large Graph Neural Networks.

    Liang S, Wang Y, Liu C, et al. [Paper]

  • [SC 2020] GE-SpMM: General-Purpose Sparse Matrix-Matrix Multiplication on GPUs for Graph Neural Networks.

    Huang G, Dai G, Wang Y, et al. [Paper] [GitHub]

  • [CCIS 2020] GNN-PIM: A Processing-in-Memory Architecture for Graph Neural Networks.

    Wang Z, Guan Y, Sun G, et al. [Paper]

  • [FPGA 2020] GraphACT: Accelerating GCN Training on CPU-FPGA Heterogeneous Platforms.

    Zeng H, Prasanna V. [Paper] [GitHub]

  • [ICPADS 2020] S-GAT: Accelerating Graph Attention Networks Inference on FPGA Platform with Shift Operation.

    Yan W, Tong W, Zhi X. [Paper]

  • [ASAP 2020] Hardware Acceleration of Large Scale GCN Inference.

    Zhang B, Zeng H, Prasanna V. [Paper]

  • [ICA3PP 2020] Towards a Deep-Pipelined Architecture for Accelerating Deep GCN on a Multi-FPGA Platform.

    Cheng Q, Wen M, Shen J, et al. [Paper]

  • [Access 2020] FPGAN: An FPGA Accelerator for Graph Attention Networks With Software and Hardware Co-Optimization.

    Yan W, Tong W, Zhi X. [Paper]

  • [arXiv 2020] GRIP: A Graph Neural Network Accelerator Architecture.

    Kiningham K, Re C, Levis P. [Paper]

  • [ASICON 2019] An FPGA Implementation of GCN with Sparse Adjacency Matrix.

    Ding L, Huang Z, Chen G. [Paper]


System Designs for Deep Graph Learning

  • [VLDB 2022] ByteGNN: efficient graph neural network training at large scale.

    Zheng C, Chen H, Cheng Y, et al. [Paper]

  • [EuroSys 2022] GNNLab: a factored system for sample-based GNN training over GPUs.

    Yang J, Tang D, Song X, et al. [Paper]

  • [TC 2022] Multi-node Acceleration for Large-scale GCNs.

    Sun G, et al. [Paper]

  • [ISCA 2022] Graphite: optimizing graph neural networks on CPUs through cooperative software-hardware techniques.

    Gong Z, Ji H, Yao Y, et al. [Paper]

  • [PPoPP 2022] QGTC: accelerating quantized graph neural networks via GPU tensor core.

    Wang Y, Feng B, Ding Y. [Paper]

  • [SIGMOD 2022] NeutronStar: Distributed GNN Training with Hybrid Dependency Management.

    Wang Q, Zhang Y, Wang H, et al. [Paper]

  • [MLSys 2022] Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining.

    Kaler T, Stathas N, Ouyang A, et al. [Paper]

  • [FPGA 2022] SPA-GCN: Efficient and Flexible GCN Accelerator with Application for Graph Similarity Computation.

    Sohrabizadeh A, Chi Y, Cong J. [Paper]

  • [HPDC 2022] TLPGNN: A Lightweight Two-Level Parallelism Paradigm for Graph Neural Network Computation on GPU.

    Fu Q, Ji Y, Huang H H. [Paper]

  • [arXiv 2022] Improved Aggregating and Accelerating Training Methods for Spatial Graph Neural Networks on Fraud Detection.

    Zeng Y, Tang J. [Paper]

  • [arXiv 2022] Marius++: Large-scale training of graph neural networks on a single machine.

    Waleffe R, Mohoney J, Rekatsinas T, et al. [Paper]

  • [CLUSTER 2021] 2PGraph: Accelerating GNN Training over Large Graphs on GPU Clusters.

    Zhang L, Lai Z, Li S, et al. [Paper]

  • [JPDC 2021] Accurate, efficient and scalable training of Graph Neural Networks.

    Zeng H, Zhou H, Srivastava A, et al. [Paper] [GitHub]

  • [JPDC 2021] High performance GPU primitives for graph-tensor learning operations.

    Zhang T, Kan W, Liu X Y. [Paper]

  • [OSDI 2021] Dorylus: Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads.

    Thorpe J, Qiao Y, Eyolfson J, et al. [Paper] [GitHub]

  • [OSDI 2021] GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs.

    Wang Y, Feng B, Li G, et al. [Paper] [GitHub]

  • [EuroSys 2021] DGCL: an efficient communication library for distributed GNN training.

    Cai Z, Yan X, Wu Y, et al. [Paper]

  • [EuroSys 2021] FlexGraph: a flexible and efficient distributed framework for GNN training.

    Wang L, Yin Q, Tian C, et al. [Paper] [GitHub]

  • [EuroSys 2021] Seastar: vertex-centric programming for graph neural networks.

    Wu Y, Ma K, Cai Z, et al. [Paper]

  • [TPDS 2021] Efficient Data Loader for Fast Sampling-Based GNN Training on Large Graphs.

    Bai Y, Li C, Lin Z, et al. [Paper]

  • [GNNSys 2021] FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks.

    He C, Balasubramanian K, Ceyani E, et al. [Paper] [Poster] [GitHub]

  • [GNNSys 2021] Graphiler: A Compiler for Graph Neural Networks.

    Xie Z, Ye Z, Wang M, et al. [Paper] [Poster]

  • [GNNSys 2021] IGNNITION: A framework for fast prototyping of Graph Neural Networks.

    Pujol Perich D, Suárez-Varela Maciá J R, Ferriol Galmés M, et al. [Paper] [Poster]

  • [GNNSys 2021] Load Balancing for Parallel GNN Training.

    Su Q, Wang M, Zheng D, et al. [Paper] [Poster]

  • [IPDPS 2021] FusedMM: A Unified SDDMM-SpMM Kernel for Graph Embedding and Graph Neural Networks.

    Rahman M K, Sujon M H, Azad A. [Paper] [GitHub]

  • [arXiv 2021] PyTorch Geometric Temporal: Spatiotemporal Signal Processing with Neural Machine Learning Models.

    Rozemberczki B, Scherer P, He Y, et al. [Paper] [GitHub]

  • [arXiv 2021] QGTC: Accelerating Quantized GNN via GPU Tensor Core.

    Wang Y, Feng B, Ding Y. [Paper] [GitHub]

  • [arXiv 2021] TC-GNN: Accelerating Sparse Graph Neural Network Computation Via Dense Tensor Core on GPUs.

    Wang Y, Feng B, Ding Y. [Paper] [GitHub]

  • [ICCAD 2020] fuseGNN: accelerating graph convolutional neural network training on GPGPU.

    Chen Z, Yan M, Zhu M, et al. [Paper] [GitHub]

  • [SC 2020] FeatGraph: A Flexible and Efficient Backend for Graph Neural Network Systems.

    Hu Y, Ye Z, Wang M, et al. [Paper] [GitHub]

  • [MLSys 2020] Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc.

    Jia Z, Lin S, Gao M, et al. [Paper]

  • [CVPR 2020] L2-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks.

    You Y, Chen T, Wang Z, et al. [Paper]

  • [TPDS 2020] EDGES: An Efficient Distributed Graph Embedding System on GPU Clusters.

    Yang D, Liu J, Lai J. [Paper]

  • [AccML 2020] GIN: High-Performance, Scalable Inference for Graph Neural Networks.

    Fu Q, Huang H H. [Paper]

  • [SoCC 2020] PaGraph: Scaling GNN training on large graphs via computation-aware caching.

    Lin Z, Li C, Miao Y, et al. [Paper]

  • [IPDPS 2020] PCGCN: Partition-Centric Processing for Accelerating Graph Convolutional Network.

    Tian C, Ma L, Yang Z, et al. [Paper]

  • [IA3 2020] DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs.

    Zheng D, Ma C, Wang M, et al. [Paper]

  • [arXiv 2019] Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs.

    Wang M, Zheng D, Ye Z, et al. [Paper] [GitHub] [Home Page]

  • [ICLR 2019] Fast Graph Representation Learning with PyTorch Geometric.

    Fey M, Lenssen J E. [Paper] [GitHub] [Documentation]

  • [KDD 2019] AliGraph: a comprehensive graph neural network platform.

    Yang H. [Paper] [GitHub]

  • [SysML 2019] PyTorch-BigGraph: A Large-scale Graph Embedding System.

    Lerer A, Wu L, Shen J, et al. [Paper] [GitHub]

  • [ATC 2019] NeuGraph: Parallel Deep Neural Network Computation on Large Graphs.

    Ma L, Yang Z, Miao Y, et al. [Paper]

  • [arXiv 2018] Relational inductive biases, deep learning, and graph networks.

    Battaglia P W, Hamrick J B, Bapst V, et al. [Paper] [GitHub]


Algorithmic Acceleration for Graph Neural Networks

  • [AAAI 2022] Early-Bird GCNs: Graph-Network Co-Optimization Towards More Efficient GCN Training and Inference via Drawing Early-Bird Lottery Tickets.

    You H, Lu Z, Zhou Z, et al. [Paper] [GitHub]

  • [ICLR 2022] Adaptive Filters for Low-Latency and Memory-Efficient Graph Neural Networks.

    Tailor S A, Opolka F, Lio P, et al. [Paper] [GitHub]

  • [ICLR 2022] Graph-less Neural Networks: Teaching Old MLPs New Tricks Via Distillation.

    Zhang S, Liu Y, Sun Y, et al. [Paper] [GitHub]

  • [ICLR 2022] EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression.

    Liu Z, Zhou K, Yang F, et al. [Paper]

  • [ICLR 2022] IGLU: Efficient GCN Training via Lazy Updates.

    Narayanan S D, Sinha A, Jain P, et al. [Paper]

  • [ICLR 2022] PipeGCN: Efficient full-graph training of graph convolutional networks with pipelined feature communication.

    Wan C, Li Y, Wolfe C R, et al. [Paper] [GitHub]

  • [ICLR 2022] Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks.

    Ramezani M, Cong W, Mahdavi M, et al. [Paper]

  • [ICML 2022] Efficient Computation of Higher-Order Subgraph Attribution via Message Passing.

    Xiong P, et al. [Paper]

  • [ICML 2022] Generalization Guarantee of Training Graph Convolutional Networks with Graph Topology Sampling.

    Li H, Weng M, Liu S, et al. [Paper]

  • [ICML 2022] Scalable Deep Gaussian Markov Random Fields for General Graphs.

    Oskarsson J, Sidén P, Lindsten F. [Paper] [GitHub]

  • [ICML 2022] GraphFM: Improving Large-Scale GNN Training via Feature Momentum.

    Yu H, Wang L, Wang B, et al. [Paper] [GitHub]

  • [MLSys 2022] BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Boundary Node Sampling.

    Wan C, Li Y, Li A, et al. [Paper] [GitHub]

  • [MLSys 2022] Graphiler: Optimizing Graph Neural Networks with Message Passing Data Flow Graph.

    Xie Z, Wang M, Ye Z, et al. [Paper]

  • [MLSys 2022] Sequential Aggregation and Rematerialization: Distributed Full-batch Training of Graph Neural Networks on Large Graphs.

    Mostafa H. [Paper] [GitHub]

  • [WWW 2022] Fograph: Enabling Real-Time Deep Graph Inference with Fog Computing.

    Zeng L, Huang P, Luo K, et al. [Paper]

  • [WWW 2022] PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm.

    Zhang W, Shen Y, Lin Z, et al. [Paper]

  • [WWW 2022] Resource-Efficient Training for Large Graph Convolutional Networks with Label-Centric Cumulative Sampling.

    Lin M, Li W, Li D, et al. [Paper]

  • [FPGA 2022] DecGNN: A Framework for Mapping Decoupled GNN Models onto CPU-FPGA Heterogeneous Platform.

    Zhang B, Zeng H, Prasanna V K. [Paper]

  • [FPGA 2022] HP-GNN: Generating High Throughput GNN Training Implementation on CPU-FPGA Heterogeneous Platform.

    Lin Y C, Zhang B, Prasanna V. [Paper]

  • [arXiv 2022] SUGAR: Efficient Subgraph-level Training via Resource-aware Graph Partitioning.

    Xue Z, Yang Y, Yang M, et al. [Paper]

  • [CAL 2022] Characterizing and Understanding Distributed GNN Training on GPUs.

    Lin H, Yan M, Yang X, et al. [Paper]

  • [ICLR 2021] Degree-Quant: Quantization-Aware Training for Graph Neural Networks.

    Tailor S A, Fernandez-Marques J, Lane N D. [Paper]

  • [ICLR 2021 Open Review] FGNAS: FPGA-Aware Graph Neural Architecture Search.

    Lu Q, Jiang W, Jiang M, et al. [Paper]

  • [ICML 2021] GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training.

    Cai T, Luo S, Xu K, et al. [Paper]

  • [ICML 2021] Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth.

    Xu K, Zhang M, Jegelka S, et al. [Paper]

  • [KDD 2021] DeGNN: Improving Graph Neural Networks with Graph Decomposition.

    Miao X, Gürel N M, Zhang W, et al. [Paper]

  • [KDD 2021] Performance-Adaptive Sampling Strategy Towards Fast and Accurate Graph Neural Networks.

    Yoon M, Gervet T, Shi B, et al. [Paper]

  • [KDD 2021] Global Neighbor Sampling for Mixed CPU-GPU Training on Giant Graphs.

    Dong J, Zheng D, Yang L F, et al. [Paper]

  • [CVPR 2021] Binary Graph Neural Networks.

    Bahri M, Bahl G, Zafeiriou S. [Paper]

  • [CVPR 2021] Bi-GCN: Binary Graph Convolutional Network.

    Wang J, Wang Y, Yang Z, et al. [Paper] [GitHub]

  • [NeurIPS 2021] Graph Differentiable Architecture Search with Structure Learning.

    Qin Y, Wang X, Zhang Z, et al. [Paper]

  • [ICCAD 2021] G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency.

    Zhang Y, You H, Fu Y, et al. [Paper]

  • [GNNSys 2021] Efficient Data Loader for Fast Sampling-based GNN Training on Large Graphs.

    Bai Y, Li C, Lin Z, et al. [Paper] [Poster]

  • [GNNSys 2021] Efficient Distribution for Deep Learning on Large Graphs.

    Hoang L, Chen X, Lee H, et al. [Paper] [Poster]

  • [GNNSys 2021] Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions.

    Tailor S A, Opolka F L, Lio P, et al. [Paper] [GitHub]

  • [ICML 2021] A Unified Lottery Ticket Hypothesis for Graph Neural Networks.

    Chen T, Sui Y, Chen X, et al. [Paper]

  • [PVLDB 2021] Accelerating Large Scale Real-Time GNN Inference using Channel Pruning.

    Zhou H, Srivastava A, Zeng H, et al. [Paper] [GitHub]

  • [SC 2021] Efficient scaling of dynamic graph neural networks.

    Chakaravarthy V T, Pandian S S, Raje S, et al. [Paper]

  • [RTAS 2021] Optimizing Memory Efficiency of Graph Neural Networks on Edge Computing Platforms.

    Zhou A, Yang J, Gao Y, et al. [Paper] [GitHub]

  • [ICDM 2021] GraphANGEL: Adaptive aNd Structure-Aware Sampling on Graph NEuraL Networks.

    Peng J, Shen Y, Chen L. [Paper]

  • [GLSVLSI 2021] Co-Exploration of Graph Neural Network and Network-on-Chip Design Using AutoML.

    Manu D, Huang S, Ding C, et al. [Paper]

  • [arXiv 2021] Edge-featured Graph Neural Architecture Search.

    Cai S, Li L, Han X, et al. [Paper]

  • [arXiv 2021] GNNSampler: Bridging the Gap between Sampling Algorithms of GNN and Hardware.

    Liu X, Yan M, Song S, et al. [Paper] [GitHub]

  • [KDD 2020] TinyGNN: Learning Efficient Graph Neural Networks.

    Yan B, Wang C, Guo G, et al. [Paper]

  • [ICLR 2020] GraphSAINT: Graph Sampling Based Inductive Learning Method.

    Zeng H, Zhou H, Srivastava A, et al. [Paper] [GitHub]

  • [NeurIPS 2020] GCN meets GPU: Decoupling “When to Sample” from “How to Sample”.

    Ramezani M, Cong W, Mahdavi M, et al. [Paper]

  • [SC 2020] Reducing Communication in Graph Neural Network Training.

    Tripathy A, Yelick K, Buluç A. [Paper] [GitHub]

  • [ICTAI 2020] SGQuant: Squeezing the Last Bit on Graph Neural Networks with Specialized Quantization.

    Feng B, Wang Y, Li X, et al. [Paper]

  • [arXiv 2020] Learned Low Precision Graph Neural Networks.

    Zhao Y, Wang D, Bates D, et al. [Paper]

  • [IPDPS 2019] Accurate, efficient and scalable graph embedding.

    Zeng H, Zhou H, Srivastava A, et al. [Paper]


Surveys and Performance Analysis on Graph Learning

  • [CAL 2022] Characterizing and Understanding HGNNs on GPUs.

    Yan M, Zou M, Yang X, et al. [Paper]

  • [arXiv 2022] Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis.

    Besta M, Hoefler T. [Paper]

  • [arXiv 2022] Survey on Graph Neural Network Acceleration: An Algorithmic Perspective.

    Liu X, Yan M, Deng L, et al. [Paper]

  • [GNNSys 2021] Analyzing the Performance of Graph Neural Networks with Pipe Parallelism.

    Dearing M T, Wang X. [Paper] [Poster]

  • [IJCAI 2021] Automated Machine Learning on Graphs: A Survey.

    Zhang Z, Wang X, Zhu W. [Paper]

  • [PPoPP 2021] Understanding and bridging the gaps in current GNN performance optimizations.

    Huang K, Zhai J, Zheng Z, et al. [Paper]

  • [ISCAS 2021] Characterizing the Communication Requirements of GNN Accelerators: A Model-Based Approach.

    Guirado R, Jain A, Abadal S, et al. [Paper]

  • [ISPASS 2021] GNNMark: A Benchmark Suite to Characterize Graph Neural Network Training on GPUs.

    Baruah T, Shivdikar K, Dong S, et al. [Paper]

  • [ISPASS 2021] Performance Analysis of Graph Neural Network Frameworks.

    Wu J, Sun J, Sun H, et al. [Paper]

  • [CAL 2021] Making a Better Use of Caches for GCN Accelerators with Feature Slicing and Automatic Tile Morphing.

    Yoo M, Song J, Lee J, et al. [Paper]

  • [arXiv 2021] Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective.

    Zhang H, Yu Z, Dai G, et al. [Paper]

  • [arXiv 2021] Understanding the Design Space of Sparse/Dense Multiphase Dataflows for Mapping Graph Neural Networks on Spatial Accelerators.

    Garg R, Qin E, Muñoz-Martínez F, et al. [Paper]

  • [arXiv 2021] A Taxonomy for Classification and Comparison of Dataflows for GNN Accelerators.

    Garg R, Qin E, Martínez F M, et al. [Paper]

  • [arXiv 2021] Graph Neural Networks: Methods, Applications, and Opportunities.

    Waikhom L, Patgiri R. [Paper]

  • [arXiv 2021] Sampling methods for efficient training of graph convolutional networks: A survey.

    Liu X, Yan M, Deng L, et al. [Paper]

  • [KDD 2020] Deep Graph Learning: Foundations, Advances and Applications.

    Rong Y, Xu T, Huang J, et al. [Paper]

  • [TKDE 2020] Deep Learning on Graphs: A Survey.

    Zhang Z, Cui P, Zhu W. [Paper]

  • [CAL 2020] Characterizing and Understanding GCNs on GPU.

    Yan M, Chen Z, Deng L, et al. [Paper]

  • [arXiv 2020] Computing Graph Neural Networks: A Survey from Algorithms to Accelerators.

    Abadal S, Jain A, Guirado R, et al. [Paper]


Maintainers

  • Ao Zhou, Beihang University. [GitHub]
  • Yingjie Qi, Beihang University. [GitHub]
  • Tong Qiao, Beihang University. [GitHub]