Pinned Repositories
wip-constrained-extractor
Work-in-progress inference, learning, and evaluation code for extractive summarization.
wip-lambada-lm
LSTM language model trained on the LAMBADA dataset.
flax
Flax is a neural network library for JAX that is designed for flexibility. (A short usage sketch follows this list.)
jax
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more. (A short sketch of these transformations follows this list.)
google-research
Google Research
tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
init2winit
algorithmic-efficiency
MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
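The jax entry above names the library's three core transformations. A minimal sketch of how they compose (jax.grad, jax.jit, and jax.vmap are the library's real entry points; the quadratic loss below is a made-up example):

import jax
import jax.numpy as jnp

def loss(w, x):
    # Toy quadratic loss, purely illustrative.
    return jnp.sum((w * x - 1.0) ** 2)

grad_loss = jax.grad(loss)                              # differentiate w.r.t. w
fast_grad = jax.jit(grad_loss)                          # JIT-compile for CPU/GPU/TPU
batched_grad = jax.vmap(grad_loss, in_axes=(None, 0))   # vectorize over a batch of x

w = jnp.ones(3)
xs = jnp.arange(6.0).reshape(2, 3)                      # batch of two inputs
print(fast_grad(w, xs[0]))                              # gradient for one input
print(batched_grad(w, xs))                              # gradients for the whole batch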
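Likewise for the flax entry: a minimal sketch of its standard Module/init/apply pattern. The MLP and its layer sizes are hypothetical, but flax.linen, nn.Dense, model.init, and model.apply are the library's actual API.

import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    hidden: int  # hypothetical hyperparameter for this sketch

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.hidden)(x))
        return nn.Dense(1)(x)

model = MLP(hidden=16)
x = jnp.ones((4, 8))                                    # batch of 4 eight-dim inputs
variables = model.init(jax.random.PRNGKey(0), x)        # initialize parameters
y = model.apply(variables, x)                           # forward pass
print(y.shape)                                          # (4, 1)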
georgedahl's Repositories
georgedahl/jax
Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more
georgedahl/flax
Flax is a neural network library for JAX that is designed for flexibility.