Pinned Repositories
awesome-ml4co
Awesome machine learning for combinatorial optimization papers.
block
An intelligent block matrix library for NumPy, PyTorch, and beyond.
blocksparse
Efficient GPU kernels for block-sparse matrix multiplication and convolution
GCN-LPA
A TensorFlow implementation of GCN-LPA
graph-pooling-papers
Papers on Graph Pooling (GNN-Pooling)
HadoopIntellijPlugin
IntelliJ IDEA Plugin for Hadoop
PaRMAT
Multi-threaded Large-Scale RMAT Graph Generator.
PM4GNN
Graph partitioning for distributed GNN training
pumpkin-book
Derivations and explanations of the formulas in Machine Learning (the "Watermelon Book"); read online at https://datawhalechina.github.io/pumpkin-book
THU-DeepHypergraph
A PyTorch library for hypergraph learning.
hookk's Repositories
hookk/AdaQP
Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training
hookk/archbase
Open-source edition of the textbook Foundations of Computer Architecture (Hu Weiwu et al., 3rd edition)
hookk/awesome-gnn-systems
A list of awesome GNN systems.
hookk/awesome-mixture-of-experts
A collection of AWESOME things about mixture-of-experts
hookk/circuit-gnn
[ICML 2019] Circuit-GNN: Graph Neural Networks for Distributed Circuit Design http://circuit-gnn.csail.mit.edu/
hookk/d3-gnn
D3GNN: Dynamic, Distributed, Dataflow for Streaming GNNs
hookk/distgnn-examples
Distributed GNN full-graph training
hookk/Distributed-GNN-Historical-Embedding
hookk/DistributedFraudDetection
A study of instant money-transfer tracking on a distributed storage and computation system: transaction history is kept in a distributed file system as a graph, and illegal activity is detected by running Graph Neural Networks (GNNs) in a distributed manner.
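
As a rough illustration of the approach this description outlines, here is a minimal, hypothetical PyTorch sketch of fraud detection as node classification on a transaction graph. The names (`MeanAggConv`, `TxGraphNet`) and the layer design are illustrative assumptions, not this repository's actual code or API.

```python
import torch
import torch.nn as nn

class MeanAggConv(nn.Module):
    """One GNN layer: mean-aggregate neighbor features, then transform."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        # adj: sparse (N, N) adjacency; divide the neighbor sum by the
        # degree to get a mean aggregation
        deg = torch.sparse.sum(adj, dim=1).to_dense().clamp(min=1).unsqueeze(-1)
        neigh = torch.sparse.mm(adj, x) / deg
        return torch.relu(self.lin(torch.cat([x, neigh], dim=-1)))

class TxGraphNet(nn.Module):
    """Two-layer GNN scoring each account node as legal vs. illegal."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = MeanAggConv(in_dim, hid_dim)
        self.conv2 = MeanAggConv(hid_dim, hid_dim)
        self.head = nn.Linear(hid_dim, 2)  # binary fraud logits per node

    def forward(self, x, adj):
        return self.head(self.conv2(self.conv1(x, adj), adj))

# Toy usage: 6 account nodes, 8 random transfer edges, 16-dim features.
idx = torch.randint(0, 6, (2, 8))
adj = torch.sparse_coo_tensor(idx, torch.ones(8), (6, 6)).coalesce()
logits = TxGraphNet(in_dim=16, hid_dim=32)(torch.randn(6, 16), adj)
```
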
hookk/DistributedGNN
DistrGNN, a project supported by a University of Pisa scholarship, explores and implements model parallelism for Graph Neural Networks (GNNs), investigating how to distribute GNN computation efficiently across multiple nodes for better performance and scalability.
hookk/DistributedGNNAcceleration
SoK: Distributed GNN Acceleration/Benchmarking
hookk/DistributedLP-GNN
Code implementation for DistributedLP-GNN
hookk/EasyRec
A framework for large scale recommendation algorithms.
hookk/EAT-DistGNN
hookk/fedgraph
FedGraph (Federated Graph) is a library built upon PyTorch to easily train Graph Neural Networks (GNNs) under federated (distributed) setting.
hookk/FPGA-ASIC-Roadmap
A roadmap for those who want to build a career as an FPGA / ASIC Engineer
hookk/Full-Distributed-GNN
hookk/GNNFlow
Distributed Deep Graph Learning Framework for Dynamic Graphs
hookk/GraNNDis_Artifact
[PACT'24] GraNNDis: a fast, unified distributed graph neural network (GNN) training framework for both full-batch (full-graph) and mini-batch training, unifying the two through a novel data/communication structure.
hookk/graphlearn-for-pytorch
A GPU-accelerated graph learning library for PyTorch, facilitating the scaling of GNN training and inference.
hookk/GRU4Rec
GRU4Rec is the original Theano implementation of the algorithm from "Session-based Recommendations with Recurrent Neural Networks" (ICLR 2016) and its follow-up, "Recurrent Neural Networks with Top-k Gains for Session-based Recommendations". The code is optimized for GPU execution.
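
To make the session-based idea concrete, below is a toy PyTorch re-sketch of the pattern the paper describes: items in a session are embedded, run through a GRU, and the final hidden state scores the next item. This is an illustrative assumption-based sketch, not the original Theano implementation.

```python
import torch
import torch.nn as nn

class SessionGRU(nn.Module):
    """Toy session-based recommender: GRU over item sequence, next-item scores."""
    def __init__(self, n_items, emb_dim=64, hid_dim=100):
        super().__init__()
        self.emb = nn.Embedding(n_items, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, n_items)  # score every candidate item

    def forward(self, session):           # session: (B, T) item ids
        h, _ = self.gru(self.emb(session))
        return self.out(h[:, -1])         # next-item scores from last step

# Toy usage: batch of 4 sessions, 7 clicks each, 1000-item catalog.
scores = SessionGRU(n_items=1000)(torch.randint(0, 1000, (4, 7)))
```
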
hookk/intent-capsnet-pytorch
IntentCapsNet implementation in Pytorch
hookk/kit-app-template
Omniverse Kit App Template
hookk/langgraph
Build resilient language agents as graphs.
hookk/NeutronStarLite
A Distributed GNN system
hookk/P3-GNN
Implementation for "P3: Distributed Deep Graph Learning at Scale"
hookk/PipeQS-code
PipeQS is an adaptive quantization and staleness-aware pipelined distributed training system for GNNs. It dynamically adjusts the bit-width of message quantization and manages staleness to reduce both communication overhead and waiting time.
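
For intuition, here is a minimal sketch of uniform message quantization of the kind this description refers to: embeddings are compressed to a given bit-width before communication and dequantized on receipt. The adaptive bit-width policy and staleness management that define PipeQS are not reproduced; `bits` is just a fixed parameter here.

```python
import torch

def quantize(msg: torch.Tensor, bits: int):
    """Uniform per-tensor quantization of a float message."""
    qmax = 2 ** bits - 1
    lo, hi = msg.min(), msg.max()
    scale = (hi - lo).clamp(min=1e-8) / qmax
    q = ((msg - lo) / scale).round().to(torch.int32)  # integer payload sent
    return q, scale, lo

def dequantize(q, scale, lo):
    """Receiver side: reconstruct an approximate float message."""
    return q.float() * scale + lo

msg = torch.randn(1024, 128)           # e.g. boundary-node embeddings
q, scale, lo = quantize(msg, bits=4)   # fewer bits: less traffic, more error
approx = dequantize(q, scale, lo)
print((msg - approx).abs().max())      # error bounded by roughly scale / 2
```
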
hookk/sanqus
Staleness + quantization for efficient distributed GNN training
hookk/scalapack
ScaLAPACK development repository
hookk/Scaling-GNNs
Graph Neural Networks (GNNs) have become popular for processing graph-structured data, but they are often difficult to scale to large datasets. This project addresses the challenge of scaling GNNs by exploring a variety of techniques such as sampling, sparse tensor operations, and distributed training.
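
As one concrete example of the techniques listed above, the sketch below shows naive neighbor sampling in PyTorch, which caps per-node aggregation cost on large graphs. It is a generic illustration under my own assumptions, not this repository's API.

```python
import torch

def sample_neighbors(edge_index: torch.Tensor, nodes: torch.Tensor, k: int):
    """Keep at most k randomly chosen incoming edges per node in `nodes`.

    edge_index: (2, E) tensor of [src, dst] pairs.
    Returns a (2, E') subset of edge_index.
    """
    keep = []
    for n in nodes.tolist():
        idx = (edge_index[1] == n).nonzero(as_tuple=True)[0]
        if idx.numel() > k:
            idx = idx[torch.randperm(idx.numel())[:k]]  # random subset
        keep.append(idx)
    return edge_index[:, torch.cat(keep)]

# Toy usage: 100-node graph, cap every node at 10 sampled in-neighbors.
edge_index = torch.randint(0, 100, (2, 5000))
sub = sample_neighbors(edge_index, torch.arange(100), k=10)
print(sub.shape)  # at most 100 * 10 edges survive
```
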