georgedahl's Stars
google/jax
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
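The transformations named in the description compose freely; a minimal sketch using JAX's public grad, jit, and vmap:

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.grad(loss)                        # differentiate with respect to w
fast_grad = jax.jit(grad_loss)                    # JIT-compile for CPU/GPU/TPU
batched_loss = jax.vmap(loss, in_axes=(None, 0))  # vectorize over a batch of x

w = jnp.ones(3)
xs = jnp.ones((8, 3))
print(fast_grad(w, xs[0]))   # gradient for a single example
print(batched_loss(w, xs))   # per-example losses for the whole batch
```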
google-research/tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
google/flax
Flax is a neural network library for JAX that is designed for flexibility.
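A minimal sketch of how a Flax model is typically defined and used, assuming the Linen API (flax.linen); the MLP module here is illustrative, not from the library:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    features: int

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.features)(x))
        return nn.Dense(1)(x)

model = MLP(features=16)
x = jnp.ones((4, 8))
params = model.init(jax.random.PRNGKey(0), x)  # parameters live outside the module
y = model.apply(params, x)                     # pure, functional forward pass
```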
google-research/dex-lang
Research language for array processing in the Haskell/ML family
jax-md/jax-md
Differentiable, Hardware Accelerated, Molecular Dynamics
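To illustrate what "differentiable molecular dynamics" means, here is a plain-JAX sketch (not the jax-md API): forces come from jax.grad of a pair potential, and the integrator is an ordinary jitted function. The Lennard-Jones energy below assumes unit sigma and epsilon:

```python
import jax
import jax.numpy as jnp

def lennard_jones(positions):
    # Total pairwise Lennard-Jones energy (sigma = epsilon = 1).
    diffs = positions[:, None, :] - positions[None, :, :]
    n = positions.shape[0]
    r2 = jnp.sum(diffs**2, axis=-1) + jnp.eye(n)  # pad the diagonal to avoid r = 0
    inv6 = r2**-3
    mask = 1.0 - jnp.eye(n)                       # exclude self-interactions
    return 0.5 * jnp.sum(4.0 * (inv6**2 - inv6) * mask)

forces = jax.grad(lambda p: -lennard_jones(p))    # force = negative energy gradient

@jax.jit
def velocity_verlet_step(pos, vel, dt=1e-3):
    vel = vel + 0.5 * dt * forces(pos)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * forces(pos)
    return pos, vel
```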
google/jaxopt
Hardware accelerated, batchable and differentiable optimizers in JAX.
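A minimal usage sketch, assuming jaxopt's solver interface (construct a solver around an objective, then call run with an initial point):

```python
import jax.numpy as jnp
import jaxopt

def rosenbrock(p):
    x, y = p
    return (1.0 - x) ** 2 + 100.0 * (y - x**2) ** 2

solver = jaxopt.LBFGS(fun=rosenbrock, maxiter=200)
result = solver.run(jnp.zeros(2))
print(result.params)  # converges near the optimum at [1., 1.]
```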
JaxGaussianProcesses/GPJax
Gaussian processes in JAX.
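GPJax wraps this math in its own model objects; as a plain-JAX illustration of the underlying computation (not GPJax's API), the posterior mean of a GP regressor with a squared-exponential kernel is:

```python
import jax.numpy as jnp

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel between two sets of 1-D inputs.
    sq_dists = (x1[:, None] - x2[None, :]) ** 2
    return variance * jnp.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    K = rbf_kernel(x_train, x_train) + noise * jnp.eye(x_train.shape[0])
    K_star = rbf_kernel(x_test, x_train)
    return K_star @ jnp.linalg.solve(K, y_train)
```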
mlcommons/algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition that measures neural network training speedups from improvements to both training algorithms and model architectures.
google/rax
Rax is a Learning-to-Rank library written in JAX.
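A plain-JAX sketch of the listwise softmax loss that learning-to-rank libraries like Rax provide (illustrative; not necessarily Rax's exact signature):

```python
import jax
import jax.numpy as jnp

def listwise_softmax_loss(scores, relevance):
    # Cross-entropy between the softmax over predicted scores and
    # relevance labels normalized into a target distribution.
    log_probs = jax.nn.log_softmax(scores)
    targets = relevance / jnp.sum(relevance)
    return -jnp.sum(targets * log_probs)

print(listwise_softmax_loss(jnp.array([2.0, 0.5, -1.0]),
                            jnp.array([1.0, 0.0, 0.0])))
```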
juliuskunze/jaxnet
Concise deep learning for JAX
google/bi-tempered-loss
Robust Bi-Tempered Logistic Loss Based on Bregman Divergences. https://arxiv.org/pdf/1906.03361.pdf
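The loss is built from tempered analogues of log and exp, reproduced here as a plain-JAX sketch of the paper's definitions (the full bi-tempered loss additionally needs an iteratively computed normalizer; see the repo):

```python
import jax.numpy as jnp

def log_t(x, t):
    # Tempered logarithm; recovers log(x) in the limit t -> 1.
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    # Tempered exponential, the inverse of log_t on its domain.
    return jnp.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))
```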
google/array_record
google/init2winit
nisarg89/spliddit
georgedahl/jax
Fork of google/jax. Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
georgedahl/flax
Fork of google/flax. Flax is a neural network library for JAX that is designed for flexibility.