- A Neural Probabilistic Language Model (Bengio et al., 2003)
- Attention is All You Need (Vaswani et al., 2017)
- Efficiently Modeling Long Sequences with Structured State Spaces (Gu et al., 2021)
- MaskGIT: Masked Generative Image Transformer (Chang et al., 2022)
- MAGVIT: Masked Generative Video Transformer (Yu et al., 2022)
- SoundStorm: Efficient Parallel Audio Generation (Borsos et al., 2023)
- GIVT: Generative Infinite-vocabulary Transformers (Tschannen et al., 2023)
- Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction (Tian et al., 2024)
- Alternators For Sequence Modeling (Rezaei et al., 2024)
- Deep Unsupervised Learning using Nonequilibrium Thermodynamics (Sohl-Dickstein et al., 2015)
- Generative Modeling by Estimating Gradients of the Data Distribution (Song et al., 2019)
- Denoising Diffusion Probabilistic Models (Ho et al., 2020)
- Score-based Generative Modeling through Stochastic Differential Equations (Song et al., 2020)
- Adversarial Score Matching and Improved Sampling for Image Generation (Jolicoeur-Martineau et al., 2020)
- Score-based Generative Modeling with Critically-Damped Langevin Diffusion (Dockhorn et al., 2021)
- Gotta Go Fast When Generating Data with Score-based Models (Jolicoeur-Martineau et al., 2021)
- Come-closer-diffuse-faster: Accelerating Conditional Diffusion Models for Inverse Problems through Stochastic Contraction (Chung et al., 2021)
- Learning to Efficiently Sample from Diffusion Probabilistic Models (Watson et al., 2021)
- Diffusion Priors in Variational Autoencoders (Wehenkel et al., 2021)
- A Variational Perspective on Diffusion-based Generative Models and Score Matching (Huang et al., 2021)
- Variational Diffusion Models (Kingma et al., 2021)
- Improved Denoising Diffusion Probabilistic Models (Nichol et al., 2021)
- Structured Denoising Diffusion Models in Discrete State-Spaces (Austin et al., 2021)
- Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions (Hoogeboom et al., 2021)
- Autoregressive Diffusion Models (Hoogeboom et al., 2022)
- Learning Fast Samplers for Diffusion Models by Differentiating through Sample Quality (Watson et al., 2022)
- Denoising Diffusion Implicit Models (Song et al., 2020)
- Pseudo Numerical Methods for Diffusion Models on Manifolds (Liu et al., 2022)
- Elucidating the Design Space of Diffusion-based Generative Models (Karras et al., 2022)
- GENIE: Higher-Order Denoising Diffusion Solvers (Dockhorn et al., 2022)
- gDDIM: Generalizing Denoising Diffusion Implicit Models (Zhang et al., 2022)
- Fast Sampling of Diffusion Models with Exponential Integrator (Zhang et al., 2022)
- Analytic-DPM: An Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models (Bao et al., 2022)
- Progressive Distillation for Fast Sampling of Diffusion Models (Salimans et al., 2022)
- On Distillation of Guided Diffusion Models (Meng et al., 2022)
- Concrete Score Matching: Generalized Score Matching for Discrete Data (Meng et al., 2022)
- Generative Modelling with Inverse Heat Dissipation (Rissanen et al., 2022)
- Blurring Diffusion Models (Hoogeboom et al., 2022)
- Flow Matching for Generative Modeling (Lipman et al., 2022)
- Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow (Liu et al., 2022)
- Diffusion Autoencoders: Toward a Meaningful and Decodable Representation (Preechakul et al., 2022)
- Consistency Models (Song et al., 2023)
- BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping (Gu et al., 2023)
- Learning Diffusion Bridges on Constrained Domains (Liu et al., 2023)
- GenPhys: From Physical Processes to Generative Models (Liu et al., 2023)
- Understanding Diffusion Objectives as the ELBO with Simple Data Augmentation (Kingma et al., 2023)
- Rolling Diffusion Models (Ruhe et al., 2024)
- Diffusion Models: A Comprehensive Survey of Methods and Applications (v12) (Yang et al., 2024)
- FiT: Flexible Vision Transformer for Diffusion Model (Lu et al., 2024)
- Structure Preserving Diffusion Models (Lu et al., 2024)
- Trajectory Consistency Distillation (Zheng et al., 2024)
- Scaling Rectified Flow Transformers for High-Resolution Image Synthesis (Esser et al., 2024)
- Align Your Steps: Optimizing Sampling Schedules in Diffusion Models (Sabour et al., 2024)
- Variational Schrödinger Diffusion Models (Deng et al., 2024)
- Imagine Flash: Accelerating Emu Diffusion Models with Backward Distillation (Kohler et al., 2024)
- Characteristic Learning for Provable One Step Generation (Ding et al., 2024)
- Discriminator-Guided Cooperative Diffusion for Joint Audio and Video Generation (Hayakawa et al., 2024)
- Masked Diffusion Models Are Fast Distribution Learners (Lei et al., 2023)
- Phased Consistency Models (Wang et al., 2024)
- UDPM: Upsampling Diffusion Probabilistic Models (Abu-Hussein et al., 2024)
- Fast Samplers for Inverse Problems in Iterative Refinement Models (Wang et al., 2024)
- Self-regularizing Restricted Boltzmann Machines (Loukas, 2019)
- Implicit Generation and Generalization in Energy-based Models (Du et al., 2019)
- How to Train Your Energy-based Models (Song et al., 2021)
- Learning Latent Space Hierarchical EBM Diffusion Models (Cui et al., 2024)
- Generative Adversarial Networks (Goodfellow et al., 2014)
- Conditional Generative Adversarial Nets (Mirza et al., 2014)
- Conditional Image Synthesis with Auxiliary Classifier GANs (Odena et al., 2016)
- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (Chen et al., 2016)
- Image-to-image Translation with Conditional Adversarial Networks (Isola et al., 2016)
- Unpaired Image-to-image Translation using Cycle-consistent Adversarial Networks (Zhu et al., 2017)
- Wasserstein GAN (Arjovsky et al., 2017)
- Improved Training of Wasserstein GANs (Gulrajani et al., 2017)
- DualGAN: Unsupervised Dual Learning for Image-to-Image Translation (Yi et al., 2017)
- Learning to Discover Cross-domain Relations with Generative Adversarial Networks (Kim et al., 2017)
- Progressive Growing of GANs for Improved Quality, Stability, and Variation (Karras et al., 2017)
- A Style-based Generator Architecture for Generative Adversarial Networks (Karras et al., 2018)
- Self-attention Generative Adversarial Networks (Zhang et al., 2018)
- Dynamically Grown Generative Adversarial Networks (Liu et al., 2021)
- VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance (Crowson et al., 2022)
- Ten Years of GANs: A Survey of the State-of-the-art (Chakraborty et al., 2023)
- A Survey on GANs for Computer Vision: Recent Research, Analysis and Taxonomy (Iglesias et al., 2024)
- Neural Processes (Garnelo et al., 2018)
- Conditional Neural Processes (Garnelo et al., 2018)
- Attentive Neural Processes (Kim et al., 2019)
- Neural Diffusion Processes (Dutordoir et al., 2022)
- The Neural Process Family: Survey, Applications and Perspectives (Jha et al., 2022)
- Spectral Convolutional Conditional Neural Processes (Mohseni et al., 2024)
- Density Estimation by Dual Ascent of the Log-likelihood (Tabak et al., 2010)
- A Family of Non-parametric Density Estimation Algorithms (Tabak et al., 2013)
- Variational Inference with Normalizing Flows (Rezende et al., 2015)
- Density Modeling of Images using a Generalized Normalization Transformation (Balle et al., 2016)
- Density Estimation Using Real NVP (Dinh et al., 2016)
- Glow: Generative Flow with Invertible 1x1 Convolutions (Kingma et al., 2018)
- Neural Ordinary Differential Equations (Chen et al., 2018)
- FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models (Grathwohl et al., 2018)
- A RAD Approach to Deep Mixture Models (Dinh et al., 2019)
- Normalizing Flows for Probabilistic Modeling and Inference (Papamakarios et al., 2019)
- Normalizing Flows: An Introduction and Review of Current Methods (Kobyzev et al., 2019)
- Latent Normalizing Flows for Discrete Sequences (Ziegler et al., 2019)
- Discrete Flows: Invertible Generative Models of Discrete Data (Tran et al., 2019)
- Temporal Normalizing Flows (Both et al., 2019)
- Stochastic Normalizing Flows (Wu et al., 2020)
- Self Normalizing Flows (Keller et al., 2020)
- Modeling Continuous Stochastic Processes with Dynamic Normalizing Flows (Deng et al., 2020)
- Gradient Boosted Normalizing Flows (Giaquinto et al., 2020)
- Principled Interpolation of Normalizing Flows (Fadel et al., 2020)
- Lossy Image Compression with Normalizing Flows (Helminger et al., 2020)
- Multi-resolution Normalizing Flows (Voleti et al., 2021)
- Diffusion Normalizing Flow (Zhang et al., 2021)
- Implicit Normalizing Flows (Lu et al., 2021)
- Neural Flows: Efficient Alternative to Neural ODEs (Bilos et al., 2021)
- The Helmholtz Machine (Dayan et al., 1995)
- Neural Variational Inference and Learning in Belief Networks (Mnih et al., 2014)
- Auto-Encoding Variational Bayes (Kingma et al., 2013)
- Hierarchical Variational Models (Ranganath et al., 2015)
- Importance Weighted Autoencoders (Burda et al., 2015)
- Ladder Variational Autoencoders (Sønderby et al., 2016)
- Discrete Variational Autoencoders (Rolfe, 2016)
- The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables (Maddison et al., 2016)
- Categorical Reparameterization with Gumbel-softmax (Jang et al., 2016)
- Conditional Image Generation with Gated PixelCNN Decoders (van den Oord et al., 2016)
- Neural Discrete Representation Learning (van den Oord et al., 2017)
- VAE With a VampPrior (Tomczak et al., 2017)
- DVAE#: Discrete Variational Autoencoders with Relaxed Boltzmann Priors (Vahdat et al., 2018)
- An Introduction to Variational Autoencoders (Kingma et al., 2019)
- Generating Diverse High-fidelity Images with VQ-VAE-2 (Razavi et al., 2019)
- Preventing Posterior Collapse with Delta-VAEs (Razavi et al., 2019)
- PixelVAE++: Improved PixelVAE with Discrete Prior (Sadeghi et al., 2019)
- BIVA: A Very Deep Hierarchy of Latent Variables for Generative Modeling (Maaloe et al., 2019)
- Taming Transformers for High-resolution Image Synthesis (Esser et al., 2020)
- DVAE++: Discrete Variational Autoencoder with Overlapping Transformations (Vahdat et al., 2018)
- NVAE: A Deep Hierarchical Variational Autoencoder (Vahdat et al., 2020)
- Dynamical Variational Autoencoders: A Comprehensive Survey (Girin et al., 2020)
- Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images (Child, 2020)
- Variational Hyper-encoding Networks (Nguyen et al., 2020)
- TimeVAE: A Variational Auto-encoder for Multivariate Time Series Generation (Desai et al., 2021)
- AdaVAE: Exploring Adaptive GPT-2s in Variational Auto-encoders for Language Modeling (Tu et al., 2022)
- Disentangling Variational Autoencoders (Pastrana, 2022)
- Latent Variable Modelling using Variational Autoencoders: A Survey (Kalingeri, 2022)
- Efficient VDVAE: Less is More (Hazami et al., 2022)