
The latest research papers on graph representation learning [a continuously updated collection of the latest graph neural network papers; follow for updates]

MIT License

GNN latest papers


Keywords: graph representation learning, graph pooling, graph transformer, graph self-supervised learning

Contents

  • [Graph Representation Learning](#graph-representation-learning)
    • [Basic Models](#basic-models)
    • [Graph Pooling](#graph-pooling)
    • [Graph Transformer](#graph-transformer)
    • [Graph Self-Supervised Learning](#graph-self-supervised-learning)
  • [GNN Applications](#gnn-applications)
    • [Chemical Molecules](#chemical-molecules)

Graph Representation Learning

Basic Models

  • [ICLR 2022] Graph-Less Neural Networks: Teaching Old MLPs New Tricks Via Distillation [Paper] [Code]
  • [ICLR 2022] LSPE: Graph Neural Networks with Learnable Structural and Positional Representations [Paper] [Code]
  • [ICLR 2022] TOGL: Topological Graph Neural Networks [Paper] [Code]
  • [ICLR 2022] GOAT: Graph Ordering Attention Networks [Paper] [Code]
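The models above all build on the same message-passing backbone: aggregate neighbor features over a normalized adjacency, then apply a learned transform. A minimal NumPy sketch of one such layer (the standard GCN-style propagation, shown as generic background rather than any listed paper's exact method; `gcn_layer` is an illustrative name):

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer in the standard message-passing form:
    add self-loops, symmetrically normalize the adjacency, aggregate
    neighbor features, then apply a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ X @ W, 0.0)  # aggregate, transform, ReLU
```

Stacking a few such layers (with learned `W`) is the baseline that the pooling and transformer papers below extend.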

Graph Pooling

  • [CIKM 2021] Pooling Architecture Search for Graph Classification [Paper] [Code]
  • [ICLR 2021] GMTPool: Accurate Learning of Graph Representations with Graph Multiset Pooling [Paper] [Code]
  • [NIPS 2021] GraphTrans: Representing Long-Range Context for Graph Neural Networks with Global Attention [Paper] [Code]
  • [TKDE 2021] MVPool: Hierarchical Multi-View Graph Pooling with Structure Learning [Paper] [Code]
  • [TPAMI 2021] TAPool: Topology-Aware Graph Pooling Networks [Paper]
  • [SIGIR 2021] CGIPool: Graph Pooling via Coarsened Graph Infomax [Paper] [Code]
  • [TNNLS 2021] iPool: Information-Based Pooling in Hierarchical Graph Neural Networks [Paper]
  • [ICLR 2020] StructPool: Structured Graph Pooling via Conditional Random Fields [Paper] [Code]
  • [WWW 2020] GSAPool: Structure-Feature based Graph Self-adaptive Pooling [Paper] [Code]
  • [NIPS 2020] VIPool: Graph Cross Networks with Vertex Infomax Pooling [Paper] [Code]
  • [AAAI 2020] ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations [Paper] [Code]
  • [ICML 2020] HaarPool: Graph Pooling with Compressive Haar Basis [Paper] [Code]
  • [ICML 2020] minCUTPool: Spectral Clustering with Graph Neural Networks for Graph Pooling [Paper]
  • [AAAI 2020] HGP-SL: Hierarchical Graph Pooling with Structure Learning [Paper] [Code]
  • [ICML 2019] SAGPool: Self-Attention Graph Pooling [Paper] [Code]
  • [arXiv 2019] EdgePool: Edge Contraction Pooling for Graph Neural Networks [Paper] [Code]
  • [ICML 2019] gpool: Graph U-Nets [Paper] [Code]
  • [NIPS 2018] DiffPool: Hierarchical Graph Representation Learning with Differentiable Pooling [Paper] [Code]
  • [AAAI 2018] SortPool: An End-to-End Deep Learning Architecture for Graph Classification [Paper] [Code]
  • [ICLR 2016] Set2set: Order Matters: Sequence to Sequence for Sets [Paper] [Code]
  • [NIPS 2016] DCNN: Diffusion-Convolutional Neural Networks [Paper] [Code]
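Several of the methods above (gPool, SAGPool, ASAP, and others) share a common top-k selection core: score every node, keep the highest-scoring fraction, and slice the graph down to the survivors. A minimal NumPy sketch of that shared idea, not a faithful reimplementation of any single paper (`topk_graph_pool` and the fixed projection vector `p` are illustrative assumptions; in a real model `p` is learned):

```python
import numpy as np

def topk_graph_pool(X, A, ratio=0.5, p=None, rng=None):
    """Top-k graph pooling in the spirit of gPool/SAGPool:
    score each node, keep the ceil(ratio * N) highest-scoring nodes,
    gate their features by the scores, and take the induced subgraph."""
    N, d = X.shape
    if rng is None:
        rng = np.random.default_rng(0)
    if p is None:
        p = rng.standard_normal(d)               # learnable in a real model
    scores = np.tanh(X @ p / np.linalg.norm(p))  # per-node scores in (-1, 1)
    k = max(1, int(np.ceil(ratio * N)))
    idx = np.argsort(scores)[::-1][:k]           # indices of the top-k nodes
    X_pool = X[idx] * scores[idx, None]          # gate features by their scores
    A_pool = A[np.ix_(idx, idx)]                 # induced subgraph adjacency
    return X_pool, A_pool, idx
```

The papers differ mainly in how the scores are computed (a GNN layer in SAGPool, local structure in ASAP, infomax objectives in VIPool/CGIPool) and in how the coarsened adjacency is rebuilt.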

Graph Transformer

  • [ICML 2023] NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs [Paper] [Code]
  • [ICML 2023] EXPHORMER: Sparse Transformers for Graphs [Paper] [Code]
  • [arXiv 2022] Pure Transformers are Powerful Graph Learners [Paper] [Code]
  • [NIPS 2022] GraphGPS: Recipe for a General, Powerful, Scalable Graph Transformer [Paper] [Code]
  • [ICML 2022] SAT: Structure-Aware Transformer for Graph Representation Learning [Paper] [Code]
  • [NIPS 2021] Graphormer: Do Transformers Really Perform Badly for Graph Representation? [Paper] [Code]
  • [NIPS 2021] GraphTrans: Representing Long-Range Context for Graph Neural Networks with Global Attention [Paper] [Code]
  • [NIPS 2021] SAN: Rethinking Graph Transformers with Spectral Attention [Paper] [Code]
  • [arXiv 2021] GraphiT: Encoding Graph Structure in Transformers [Paper] [Code]
  • [arXiv 2020] Graph-Bert: Only Attention is Needed for Learning Graph Representations [Paper] [Code]
  • [NIPS 2020] GROVER: Self-Supervised Graph Transformer on Large-Scale Molecular Data [Paper] [Code]
  • [AAAI 2021 Workshop] graphtransformer: A Generalization of Transformer Networks to Graphs [Paper] [Code]
  • [NIPS 2022] NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [Paper] [Code]
  • [IJCAI 2023] Gapformer: Graph Transformer with Graph Pooling for Node Classification [Paper]
  • [ICLR 2024] Polynormer: Polynomial-Expressive Graph Transformer in Linear Time [Paper]
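A recurring design across these papers (Graphormer, SAN, GraphiT, and others) is full self-attention over nodes with graph structure injected as an additive bias on the attention logits, e.g. a bias derived from shortest-path distances. A minimal single-head NumPy sketch of that generic mechanism, with all weight matrices and the bias `B` supplied by the caller (illustrative assumptions, not any one paper's exact formulation):

```python
import numpy as np

def graph_attention_with_bias(X, B, Wq, Wk, Wv):
    """Single-head self-attention over nodes with an additive structural
    bias B (e.g. encoding shortest-path distances), the core idea behind
    Graphormer-style graph transformers."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    dk = Q.shape[1]
    logits = Q @ K.T / np.sqrt(dk) + B            # bias injects graph structure
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    return attn @ V                               # updated node representations
```

Setting entries of `B` to large negative values recovers sparse, neighborhood-restricted attention, which is roughly the direction the sparse-transformer papers (e.g. EXPHORMER) take for scalability.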

Graph Self-Supervised Learning

  • [KDD 2022] GraphMAE: Self-Supervised Masked Graph Autoencoders [Paper] [Code]
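The masked-autoencoder pretext task used by GraphMAE starts from a simple setup: hide the features of a random subset of nodes, then train an encoder-decoder to reconstruct them. A minimal NumPy sketch of just the masking step (here zeros stand in for the shared learnable [MASK] token; `mask_node_features` is an illustrative name, not GraphMAE's API):

```python
import numpy as np

def mask_node_features(X, mask_rate=0.5, rng=None):
    """Masked-node setup used by graph masked autoencoders: replace a
    random subset of node feature rows with a mask token (zeros here
    for simplicity) and return the indices to reconstruct."""
    rng = rng or np.random.default_rng(0)
    N = X.shape[0]
    n_mask = max(1, int(mask_rate * N))
    idx = rng.choice(N, size=n_mask, replace=False)  # nodes to hide
    X_masked = X.copy()
    X_masked[idx] = 0.0                              # stand-in for [MASK]
    return X_masked, idx
```

The encoder then sees `X_masked` together with the full graph structure, and the reconstruction loss is computed only on the rows in `idx`.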

GNN Applications

Chemical Molecules

  • [ICLR 2022] Molecular Contrastive Learning with Chemical Element Knowledge Graph [Paper] [Code]

This list of the latest GNN research papers is continuously updated. If you have any questions, please email lizhipengqilu@gmail. If you like this project, please star or fork it.