labml.ai Deep Learning Paper Implementations
This is a collection of simple PyTorch implementations of neural networks and related algorithms. These implementations are documented with explanations.
The website renders the implementations as side-by-side formatted notes, which we believe help you understand these algorithms better.
We are actively maintaining this repo and adding new implementations almost weekly.
Modules
Transformers
- Multi-headed attention
- Transformer building blocks
- Transformer XL
- Compressive Transformer
- GPT Architecture
- GLU Variants
- kNN-LM: Generalization through Memorization
- Feedback Transformer
- Switch Transformer
- Fast Weights Transformer
- FNet
- Attention Free Transformer
- Masked Language Model
- MLP-Mixer: An all-MLP Architecture for Vision
- Pay Attention to MLPs (gMLP)
- Vision Transformer (ViT)
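At the heart of most of the transformer variants above is scaled dot-product multi-head attention. As a rough, framework-agnostic sketch of the math (in NumPy, omitting the learned query/key/value/output projections that a full implementation includes; the function names are illustrative, not this library's API):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(q, k, v, n_heads):
    # q, k, v: (seq_len, d_model); d_model must be divisible by n_heads
    seq_len, d_model = q.shape
    d_k = d_model // n_heads

    def split_heads(x):
        # (seq_len, d_model) -> (n_heads, seq_len, d_k)
        return x.reshape(seq_len, n_heads, d_k).transpose(1, 0, 2)

    qh, kh, vh = split_heads(q), split_heads(k), split_heads(v)
    # Attention scores per head, scaled by sqrt(d_k): (n_heads, seq_len, seq_len)
    scores = qh @ kh.transpose(0, 2, 1) / np.sqrt(d_k)
    attn = softmax(scores, axis=-1)
    # Weighted sum of values, then merge heads back to (seq_len, d_model)
    out = attn @ vh
    return out.transpose(1, 0, 2).reshape(seq_len, d_model)
```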
Recurrent Highway Networks
LSTM
HyperNetworks - HyperLSTM
ResNet
Capsule Networks
Generative Adversarial Networks
- Original GAN
- GAN with deep convolutional network
- Cycle GAN
- Wasserstein GAN
- Wasserstein GAN with Gradient Penalty
- StyleGAN 2
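The GAN variants above differ mainly in their loss functions and architectures. As a minimal sketch of the original GAN objective (with the common non-saturating generator loss), written in NumPy with illustrative names, not this library's API:

```python
import numpy as np

def bce(p, target, eps=1e-12):
    # Binary cross-entropy between predicted probabilities p and a target label
    p = np.clip(p, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def gan_losses(d_real, d_fake):
    # d_real, d_fake: discriminator outputs (probabilities) on real/generated samples.
    # Discriminator: push real -> 1, fake -> 0.
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
    # Generator (non-saturating form): push fake -> 1.
    g_loss = bce(d_fake, 1.0)
    return d_loss, g_loss
```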
Sketch RNN
Graph Neural Networks
Counterfactual Regret Minimization (CFR)
Solving games with incomplete information, such as poker, with CFR.
Reinforcement Learning
- Proximal Policy Optimization with Generalized Advantage Estimation
- Deep Q Networks with Dueling Network, Prioritized Replay, and Double Q Network.
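PPO's central idea is the clipped surrogate objective, which keeps the updated policy close to the one that collected the data. A minimal NumPy sketch of that loss (function and parameter names are illustrative, not this library's API):

```python
import numpy as np

def ppo_clip_loss(log_pi_new, log_pi_old, advantages, clip_eps=0.2):
    # Probability ratio between new and old policies for each action
    ratio = np.exp(log_pi_new - log_pi_old)
    # Clip the ratio so large advantages cannot push the policy too far
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps)
    # Pessimistic (min) surrogate; negated because we minimize the loss
    return -np.minimum(ratio * advantages, clipped * advantages).mean()
```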
Optimizers
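As an example of the kind of algorithm an optimizer module implements, here is a sketch of a single Adam-style update step in NumPy (the function name and signature are generic illustrations, not this library's API):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    # Bias correction for the zero-initialized moment estimates (t starts at 1)
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Parameter update scaled by the adaptive per-parameter step size
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```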
Normalization Layers
- Batch Normalization
- Layer Normalization
- Instance Normalization
- Group Normalization
- Weight Standardization
- Batch-Channel Normalization
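The normalization layers above mainly differ in which axes the statistics are computed over. As a minimal NumPy sketch of batch normalization at training time (per-feature statistics over the batch; omits the running averages used at inference, and the names are illustrative, not this library's API):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features); normalize each feature across the batch dimension
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    # Learned per-feature scale (gamma) and shift (beta)
    return gamma * x_hat + beta
```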
Distillation
Installation
pip install labml-nn
Citing LabML
If you use LabML for academic research, please cite the library using the following BibTeX entry.
@misc{labml,
author = {Varuna Jayasiri and Nipun Wijerathne},
title = {LabML: A library to organize machine learning experiments},
year = {2020},
url = {https://nn.labml.ai/},
}