Papers in 100 Lines of Code

Implementations of research papers, each in 100 lines of code.

Implemented papers

[Maxout Networks]
  • Maxout Networks [arXiv]
  • Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, Yoshua Bengio
  • 2013-02-18
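
A minimal sketch of a maxout unit, as an illustration rather than the repository's implementation: each output takes the maximum over k affine pieces of the input.

    import torch
    import torch.nn as nn

    class Maxout(nn.Module):
        """Maxout layer: elementwise max over k affine transformations per output unit."""
        def __init__(self, in_features, out_features, k=4):
            super().__init__()
            self.k = k
            self.linear = nn.Linear(in_features, out_features * k)

        def forward(self, x):
            z = self.linear(x)                        # (batch, out_features * k)
            z = z.view(*x.shape[:-1], -1, self.k)     # (batch, out_features, k)
            return z.max(dim=-1).values               # max over the k affine pieces

    # usage: y = Maxout(784, 256, k=4)(torch.randn(32, 784))
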
[Playing Atari with Deep Reinforcement Learning]
  • Playing Atari with Deep Reinforcement Learning [arXiv]
  • Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller
  • 2013-12-19
[Auto-Encoding Variational Bayes]
  • Auto-Encoding Variational Bayes [arXiv]
  • Diederik P Kingma, Max Welling
  • 2013-12-20
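
A minimal sketch of the reparameterization trick and KL term at the core of the paper (illustrative, not the repository's code): sampling z = mu + sigma * eps keeps the sampling step differentiable with respect to the encoder outputs.

    import torch

    def reparameterize(mu, logvar):
        """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I)."""
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def kl_divergence(mu, logvar):
        """KL(N(mu, sigma^2) || N(0, I)), summed over latent dimensions."""
        return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
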
[Generative Adversarial Networks]
  • Generative Adversarial Networks [arXiv]
  • Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
  • 2014-06-10
[Conditional Generative Adversarial Nets]
  • Conditional Generative Adversarial Nets [arXiv]
  • Mehdi Mirza, Simon Osindero
  • 2014-11-06
[Adam: A Method for Stochastic Optimization]
  • Adam: A Method for Stochastic Optimization [arXiv]
  • Diederik P. Kingma, Jimmy Ba
  • 2014-12-22
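
A minimal NumPy sketch of the Adam update rule (illustration only): bias-corrected first- and second-moment estimates scale the step size per parameter.

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam update (t starts at 1); returns new parameters and moment estimates."""
        m = beta1 * m + (1 - beta1) * grad          # first moment (mean of gradients)
        v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (uncentered variance)
        m_hat = m / (1 - beta1 ** t)                # bias correction
        v_hat = v / (1 - beta2 ** t)
        theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
        return theta, m, v
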
[NICE: Non-linear Independent Components Estimation]
  • NICE: Non-linear Independent Components Estimation [arXiv]
  • Laurent Dinh, David Krueger, Yoshua Bengio
  • 2014-10-30
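
A minimal sketch of one additive coupling layer from NICE (my illustration, not the repository's code): half of the dimensions pass through unchanged and shift the other half, so the layer is exactly invertible with a unit Jacobian determinant.

    import torch
    import torch.nn as nn

    class AdditiveCoupling(nn.Module):
        """NICE additive coupling: y1 = x1, y2 = x2 + m(x1)."""
        def __init__(self, dim, hidden=128):
            super().__init__()
            self.m = nn.Sequential(nn.Linear(dim // 2, hidden), nn.ReLU(),
                                   nn.Linear(hidden, dim // 2))

        def forward(self, x):
            x1, x2 = x.chunk(2, dim=-1)
            return torch.cat([x1, x2 + self.m(x1)], dim=-1)

        def inverse(self, y):
            y1, y2 = y.chunk(2, dim=-1)
            return torch.cat([y1, y2 - self.m(y1)], dim=-1)
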
[Deep Unsupervised Learning using Nonequilibrium Thermodynamics]
  • Deep Unsupervised Learning using Nonequilibrium Thermodynamics [arXiv]
  • Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, Surya Ganguli
  • 2015-03-12
[Variational Inference with Normalizing Flows]
  • Variational Inference with Normalizing Flows [arXiv]
  • Danilo Jimenez Rezende, Shakir Mohamed
  • 2015-05-21
[Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks]
  • Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [arXiv]
  • Alec Radford, Luke Metz, Soumith Chintala
  • 2015-11-19
[Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)]
  • Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) [arXiv]
  • Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter
  • 2015-11-23
[Adversarially Learned Inference]
  • Adversarially Learned Inference [arXiv]
  • Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, Aaron Courville
  • 2016-06-02
[Improved Techniques for Training GANs]
  • Improved Techniques for Training GANs [arXiv]
  • Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen
  • 2016-06-10
[Gaussian Error Linear Units (GELUs)]
  • Gaussian Error Linear Units (GELUs) [arXiv]
  • Dan Hendrycks, Kevin Gimpel
  • 2016-06-27
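
A minimal sketch of the GELU activation (illustrative): GELU(x) = x * Phi(x), where Phi is the standard normal CDF, together with the tanh approximation given in the paper.

    import math
    import torch

    def gelu(x):
        """Exact GELU: x * Phi(x)."""
        return x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

    def gelu_tanh(x):
        """Tanh approximation from the paper."""
        return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))
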
[Least Squares Generative Adversarial Networks]
  • Least Squares Generative Adversarial Networks [arXiv]
  • Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang, Stephen Paul Smolley
  • 2016-11-13
[Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks]
  • Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks [arXiv]
  • Chelsea Finn, Pieter Abbeel, Sergey Levine
  • 2017-03-09
[Adversarial Feature Learning]
  • Adversarial Feature Learning [arXiv]
  • Jeff Donahue, Philipp Krähenbühl, Trevor Darrell
  • 2017-04-03
[Self-Normalizing Neural Networks]
  • Self-Normalizing Neural Networks [arXiv]
  • Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
  • 2017-06-08
[Deep Image Prior]
  • Deep Image Prior [arXiv]
  • Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky
  • 2017-11-29
[On First-Order Meta-Learning Algorithms]
  • On First-Order Meta-Learning Algorithms [arXiv]
  • Alex Nichol, Joshua Achiam, John Schulman
  • 2018-03-08
[Sequential Neural Likelihood]
  • Sequential Neural Likelihood: Fast Likelihood-free Inference with Autoregressive Flows [arXiv]
  • George Papamakarios, David C. Sterratt, Iain Murray
  • 2018-05-18
[On the Variance of the Adaptive Learning Rate and Beyond]
  • On the Variance of the Adaptive Learning Rate and Beyond [arXiv]
  • Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Jiawei Han
  • 2019-08-08
[Optimizing Millions of Hyperparameters by Implicit Differentiation]
  • Optimizing Millions of Hyperparameters by Implicit Differentiation [PMLR]
  • Jonathan Lorraine, Paul Vicol, David Duvenaud
  • 2019-10-06
[Implicit Neural Representations with Periodic Activation Functions]
  • Implicit Neural Representations with Periodic Activation Functions [arXiv]
  • Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein
  • 2020-06-17
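
A minimal sketch of a SIREN layer (my illustration, not the repository's code): a linear layer followed by sin(omega_0 * x), with the uniform weight initialization proposed in the paper.

    import math
    import torch
    import torch.nn as nn

    class SineLayer(nn.Module):
        """SIREN layer: sin(omega_0 * (W x + b)) with the paper's initialization."""
        def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
            super().__init__()
            self.omega_0 = omega_0
            self.linear = nn.Linear(in_features, out_features)
            with torch.no_grad():
                if is_first:
                    bound = 1.0 / in_features
                else:
                    bound = math.sqrt(6.0 / in_features) / omega_0
                self.linear.weight.uniform_(-bound, bound)

        def forward(self, x):
            return torch.sin(self.omega_0 * self.linear(x))
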
[Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains]
  • Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains [arXiv]
  • Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng
  • 2020-06-18
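
A minimal sketch of the random Fourier feature mapping (illustrative): gamma(v) = [cos(2*pi*Bv), sin(2*pi*Bv)] with B drawn from a Gaussian whose scale controls the frequency content.

    import numpy as np

    def fourier_features(v, B):
        """Map coordinates v of shape (n, d) to [cos(2*pi*v@B.T), sin(2*pi*v@B.T)]."""
        proj = 2.0 * np.pi * v @ B.T                                  # (n, m)
        return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)  # (n, 2m)

    rng = np.random.default_rng(0)
    B = 10.0 * rng.standard_normal((256, 2))          # larger scale -> higher frequencies
    features = fourier_features(rng.random((1024, 2)), B)   # (1024, 512)
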
[Likelihood-free MCMC with Amortized Approximate Ratio Estimators]
  • Likelihood-free MCMC with Amortized Approximate Ratio Estimators [PMLR]
  • Joeri Hermans, Volodimir Begy, Gilles Louppe
  • 2020-06-26
[NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis]
  • NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [arXiv]
  • Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
  • 2020-08-03
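
A minimal sketch of NeRF's volume rendering quadrature along one ray (illustrative only): colors are composited with weights T_i * (1 - exp(-sigma_i * delta_i)), where T_i is the accumulated transmittance.

    import torch

    def render_ray(sigmas, colors, deltas):
        """Composite per-sample densities and colors along a ray.
        sigmas: (n,), colors: (n, 3), deltas: (n,) distances between samples."""
        alphas = 1.0 - torch.exp(-sigmas * deltas)                    # opacity per sample
        trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)            # transmittance products
        trans = torch.cat([torch.ones_like(trans[:1]), trans[:-1]])   # T_i excludes sample i
        weights = trans * alphas
        return (weights[:, None] * colors).sum(dim=0)                 # (3,) pixel color
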
[Multiplicative Filter Networks]
  • Multiplicative Filter Networks [OpenReview]
  • Rizal Fathony, Anit Kumar Sahu, Devin Willmott, J Zico Kolter
  • 2020-09-28
[Learned Initializations for Optimizing Coordinate-Based Neural Representations]
  • Learned Initializations for Optimizing Coordinate-Based Neural Representations [arXiv]
  • Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, Ren Ng
  • 2020-12-03
[FastNeRF: High-Fidelity Neural Rendering at 200FPS]
  • FastNeRF: High-Fidelity Neural Rendering at 200FPS [arXiv]
  • Stephan J. Garbin, Marek Kowalski, Matthew Johnson, Jamie Shotton, Julien Valentin
  • 2021-03-18
[KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs]
  • KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs [arXiv]
  • Christian Reiser, Songyou Peng, Yiyi Liao, Andreas Geiger
  • 2021-03-25
[PlenOctrees for Real-time Rendering of Neural Radiance Fields]
  • PlenOctrees for Real-time Rendering of Neural Radiance Fields [arXiv]
  • Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa
  • 2021-03-25
[NeRF--: Neural Radiance Fields Without Known Camera Parameters]
  • NeRF--: Neural Radiance Fields Without Known Camera Parameters [arXiv]
  • Zirui Wang, Shangzhe Wu, Weidi Xie, Min Chen, Victor Adrian Prisacariu
  • 2021-02-14
[Gromov-Wasserstein Distances between Gaussian Distributions]
  • Gromov-Wasserstein Distances between Gaussian Distributions [arXiv]
  • Antoine Salmona, Julie Delon, Agnès Desolneux
  • 2021-08-16
[Plenoxels: Radiance Fields without Neural Networks]
  • Plenoxels: Radiance Fields without Neural Networks [arXiv]
  • Alex Yu, Sara Fridovich-Keil, Matthew Tancik, Qinhong Chen, Benjamin Recht, Angjoo Kanazawa
  • 2021-12-09
[InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering]
  • InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [arXiv]
  • Mijeong Kim, Seonguk Seo, Bohyung Han
  • 2021-12-31
[K-Planes: Explicit Radiance Fields in Space, Time, and Appearance]
  • K-Planes: Explicit Radiance Fields in Space, Time, and Appearance [arXiv]
  • Sara Fridovich-Keil, Giacomo Meanti, Frederik Warburg, Benjamin Recht, Angjoo Kanazawa
  • 2023-01-24
[FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization]
  • FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization [arXiv]
  • Jiawei Yang, Marco Pavone, Yue Wang
  • 2023-03-13