jordan7186
Graduate student at Yonsei University, Seoul, South Korea. Interested in ML for Graphs & Explainable AI
https://sites.google.com/site/midasyonsei/home?authuser=0
jordan7186's Stars
3b1b/manim
Animation engine for explanatory math videos
karpathy/nanoGPT
The simplest, fastest repository for training/finetuning medium-sized GPTs.
ManimCommunity/manim
A community-maintained Python framework for creating mathematical animations.
KindXiaoming/pykan
Kolmogorov Arnold Networks
andrewyng/aisuite
Simple, unified interface to multiple Generative AI providers
huggingface/smol-course
A course on aligning smol models.
erikw/tmux-powerline
⚡️ A tmux plugin giving you a hackable status bar consisting of dynamic & beautiful looking powerline segments, written purely in bash.
graphdeeplearning/graphtransformer
Graph Transformer Architecture. Source code for "A Generalization of Transformer Networks to Graphs", DLG-AAAI'21.
jacobgil/vit-explain
Explainability for Vision Transformers
google-deepmind/clrs
mitmath/matrixcalc
MIT IAP short course: Matrix Calculus for Machine Learning and Beyond
LynnHo/Matrix-Calculus-Tutorial
Matrix Calculus via Differentials, Matrix Derivative, Matrix Calculus Tutorial
anthropics/PySvelte
A library for bridging Python and HTML/Javascript (via Svelte) for creating interactive visualizations
LUOyk1999/tunedGNN
[NeurIPS 2024] Implementation of "Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification"
YangLing0818/VQGraph
[ICLR 2024] VQGraph: Rethinking Graph Representation Space for Bridging GNNs and MLPs
luis-mueller/probing-graph-transformers
Code for our paper "Attending to Graph Transformers"
gorokoba560/norm-analysis-of-transformer
kerighan/graph-walker
Fastest random walks generator on networkx graphs
CurryTang/LLMGNN
Label-free Node Classification on Graphs with Large Language Models (LLMs)
TomFrederik/unseal
Mechanistic Interpretability for Transformer Models
G-Taxonomy-Workgroup/GPSE
Graph Positional and Structural Encoder
apartresearch/Integer_Addition
✱ Understanding the underlying learning dynamics of simple tasks in Transformer networks
m30m/gnn-explainability
jw9730/random-walk
[ICML'24W] Revisiting Random Walks for Learning on Graphs, in PyTorch
opallab/graphalgosimulation
Source code for the paper "Simulation of Graph Algorithms with Looped Transformers"
CG80499/Attention-only-transformers
JoseRFJuniorLLMs/TransNAR
Transformers and Neural Algorithmic Reasoners (NARs)
AndreFCruz/hpt
Hyperparameter tuning with minimal boilerplate
jordan7186/GAtt
Source code for the GAtt method in "Faithful and Accurate Self-Attention Attribution for Message Passing Neural Networks via the Computation Tree Viewpoint".
SiliconSloth/Algoformer
Transformer model trained to run algorithms