deep-learning-implementations

These are my implementations of well-known neural network architectures in PyTorch.

Neural Network

Autograd

  • micrograd: A tiny autograd engine with backpropagation and a PyTorch-like neural network library.
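
For a sense of what the micrograd notebook covers, here is a minimal scalar autograd sketch in the same spirit; the class and example below are illustrative, not the repository's actual code.

```python
class Value:
    """A scalar value with reverse-mode autodiff (micrograd-style sketch)."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._prev = set(_children)
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # Build a topological order, then propagate gradients in reverse.
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

# d(x*y + x)/dx = y + 1 = 4, d(x*y + x)/dy = x = 2
x, y = Value(2.0), Value(3.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)  # 4.0 2.0
```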

Computer Vision

CNN Backbones

  • LeNet: Gradient-Based Learning Applied to Document Recognition (1998)
  • VGG: Very Deep Convolutional Networks for Large-Scale Image Recognition (2014)
  • GoogLeNet: Going deeper with convolutions (2014)
  • ResNet: Deep Residual Learning for Image Recognition (2015); a basic residual block sketch follows this list
  • EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (2019)
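
The recurring idea behind ResNet (and the deeper backbones that build on it) is the residual block. A minimal PyTorch sketch, assuming a standard BasicBlock layout rather than the exact code in the notebooks:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual block: two 3x3 convs with an identity (or projected) shortcut."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            # Project the input when the spatial size or channel count changes.
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.shortcut(x))

x = torch.randn(1, 64, 56, 56)
print(BasicBlock(64, 128, stride=2)(x).shape)  # torch.Size([1, 128, 28, 28])
```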

GANs

  • GAN: Generative Adversarial Networks (2014)
  • DCGAN: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (2015)
  • WGAN: Wasserstein GAN (2017)
  • WGAN-GP: Improved Training of Wasserstein GANs (2017); a gradient penalty sketch follows this list
  • CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks (2017)
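
The key ingredient WGAN-GP adds over WGAN is a gradient penalty on interpolated samples, pushing the critic's gradient norm toward 1. A minimal sketch, assuming a generic `critic` network; the helper below is illustrative, not the repository's actual code:

```python
import torch
import torch.nn as nn

def gradient_penalty(critic, real, fake, device="cpu"):
    """WGAN-GP penalty: ((||grad critic(x_hat)||_2 - 1)^2) on interpolates x_hat."""
    batch_size = real.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=device)
    interpolates = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interpolates)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolates,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
    )[0]
    grads = grads.view(batch_size, -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Toy usage with a linear critic on 32x32 RGB images.
critic = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
real, fake = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
print(gradient_penalty(critic, real, fake))
```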

Transformer-based Models

  • Transformer: Attention Is All You Need (2017); a scaled dot-product attention sketch follows this list
  • SETR: Rethinking Semantic Segmentation from a Sequence-to-Sequence Perspective with Transformers (2020)
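
At the core of both models is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal PyTorch sketch (illustrative, not the notebook code):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        # Masked positions get -inf so they receive zero attention weight.
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v, weights

q = k = v = torch.randn(2, 8, 10, 64)  # (batch, heads, seq_len, d_k)
out, attn = scaled_dot_product_attention(q, k, v)
print(out.shape, attn.shape)  # [2, 8, 10, 64] [2, 8, 10, 10]
```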

Diffusion Models

  • DDPM: Denoising Diffusion Probabilistic Models (2020)
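
The DDPM forward process has a closed form, x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps, so training can sample any timestep directly. A minimal sketch with a linear beta schedule (illustrative, not the notebook code):

```python
import torch

# Linear beta schedule and cumulative products alpha_bar_t = prod(1 - beta_s).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t, noise=None):
    """Sample x_t from q(x_t | x_0) for a batch of timesteps t."""
    if noise is None:
        noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise

x0 = torch.randn(4, 3, 32, 32)
t = torch.randint(0, T, (4,))
print(q_sample(x0, t).shape)  # torch.Size([4, 3, 32, 32])
```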

UNet

  • UNet: Convolutional Networks for Biomedical Image Segmentation (2015)
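
The defining feature of U-Net is the skip connection: each decoder stage concatenates the matching encoder feature map before convolving. A minimal PyTorch sketch of one decoder step (illustrative, not the notebook code):

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 conv + ReLU layers, the basic unit of the U-Net encoder/decoder."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Decoder step: upsample, then concatenate the matching encoder feature map
# (the skip connection) along the channel dimension before convolving.
up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
decode = DoubleConv(128, 64)

skip = torch.randn(1, 64, 64, 64)    # encoder feature map
bottom = torch.randn(1, 128, 32, 32) # feature map from the level below
x = torch.cat([skip, up(bottom)], dim=1)  # -> (1, 128, 64, 64)
print(decode(x).shape)  # torch.Size([1, 64, 64, 64])
```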

Other Applications