Pinned Repositories
LightGCN
stark
STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases (https://stark.stanford.edu/)
DIR-GNN
Official code of "Discovering Invariant Rationales for Graph Neural Networks" (ICLR 2022)
DISC
Official code of "Discover and Cure: Concept-aware Mitigation of Spurious Correlation" (ICML 2023)
Falcon
FALCON-GLFrontiers
Official Code of "Efficient Automatic Graph Learning via Design Relations" (GLFrontiers Workshop NeurIPS 2022)
GraphMETRO
Early release of the official implementation for "GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts"
LightGCN-parallelized-version
A version of LightGCN that parallelizes negative sampling on the CPU
ReFine
Official code of "Towards Multi-Grained Explainability for Graph Neural Networks" (NeurIPS 2021) + PyTorch implementation of recent attribution methods for GNNs
avatar
AvaTaR: Optimizing LLM Agents for Tool-Assisted Knowledge Retrieval (https://arxiv.org/abs/2406.11200)
Wuyxin's Repositories
Wuyxin/DIR-GNN
Official code of "Discovering Invariant Rationales for Graph Neural Networks" (ICLR 2022)
Wuyxin/ReFine
Official code of "Towards Multi-Grained Explainability for Graph Neural Networks" (NeurIPS 2021) + PyTorch implementation of recent attribution methods for GNNs
Wuyxin/DISC
Official code of "Discover and Cure: Concept-aware Mitigation of Spurious Correlation" (ICML 2023)
Wuyxin/LightGCN-parallelized-version
A version of LightGCN that parallelizes negative sampling on the CPU
Wuyxin/GraphMETRO
Early release of the official implementation for "GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts"
Wuyxin/FALCON-GLFrontiers
Official Code of "Efficient Automatic Graph Learning via Design Relations" (GLFrontiers Workshop NeurIPS 2022)
Wuyxin/Falcon
Wuyxin/Archive
Personal academic archive
Wuyxin/graph-research-dir
Wuyxin/LightGCN
Wuyxin/NeuRec
Next RecSys Library
Wuyxin/nni
An open-source AutoML toolkit that automates the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning.
Wuyxin/Wuyxin