luke-a-thompson
PhD Candidate | CS & Applied Mathematics | University of Sydney
The University of Sydney, Sydney, Australia
luke-a-thompson's Stars
lixilinx/psgd_torch
PyTorch implementation of preconditioned stochastic gradient descent (affine group preconditioner, low-rank approximation preconditioner, and more)
ethansmith2000/fsdp_optimizers
Supporting PyTorch FSDP for optimizers
ClashLuke/HeavyBall
Efficient optimizers
lequanlin/GNN-Diff
MinkaiXu/EGNO
ICML2024: Equivariant Graph Neural Operator for Modeling 3D Dynamics
XiaShan1227/Graphormer
Do Transformers Really Perform Bad for Graph Representation? [NeurIPS 2021]
KellerJordan/Muon
Muon optimizer for neural networks: >30% extra sample efficiency, <3% wallclock overhead
psibi/how-to-prove
My solutions to Velleman's book
dso-org/deep-symbolic-optimization
A deep learning framework for symbolic optimization.
luke-a-thompson/AmesFormer
AmesFormer - A state-of-the-art graph transformer for mutagenicity prediction
tracel-ai/cubecl
Multi-platform high-performance compute language extension for Rust.
ChosunOne/loxide
Rust implementation of the Lox language
ekosachev/astray
Open-Markets-Initiative/Directory
General information about The Open Markets Initiative
QizhiPei/FABind
FABind: Fast and Accurate Protein-Ligand Binding (NeurIPS 2023)
blue-yonder/tsfresh
Automatic extraction of relevant features from time series
DynamicsAndNeuralSystems/pycatch22
Python implementation of catch22
normal-computing/posteriors
Uncertainty quantification with PyTorch
zhangxwww/HyperFusion
Alex313031/Thorium-Win
Chromium fork for Windows named after radioactive element No. 90; Windows builds of https://github.com/Alex313031/Thorium
openai/simple-evals
luke-a-thompson/ames_graphormer
Microsoft Graphormer rewritten in PyTorch-Geometric
glzr-io/glazewm
GlazeWM is a tiling window manager for Windows inspired by i3wm.
Wenox/fast-fw
Optimized implementation of Floyd Warshall algorithm using modern AVX2.
lsj2408/Graphormer-GD
[ICLR 2023 notable top-5%] Rethinking the Expressive Power of GNNs via Graph Biconnectivity (official implementation)
ELS-RD/kernl
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
PHD-lanyu/SpikeGraphormer
Source code for the paper "SpikeGraphormer: A High-Performance Graph Transformer with Spiking Graph Attention"
leffff/graphormer-pyg
Microsoft Graphormer (https://arxiv.org/abs/2106.05234) rewritten in PyTorch Geometric
aditya-K2/gspt
Spotify for the terminal written in Go
zhao-ht/DeepGraph
Code for our paper "Are More Layers Beneficial to Graph Transformers?" published at ICLR 2023