Variational-Autoencoder

Coding a variational autoencoder from scratch using the PyTorch framework

Implementation of Variational Autoencoders

This repository demonstrates how to build, train, and run inference with a variational autoencoder neural network from scratch using the PyTorch deep learning framework. The code is available in the source (src) folder.

What is a VAE?

A variational autoencoder (VAE) combines the encoder-decoder architecture with variational inference to generate new data points. Rather than mapping each input to a single point, the encoder maps it to a probability distribution in the latent space, which provides more flexibility and better generation capabilities.
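
Under the hood, training maximizes the evidence lower bound (ELBO): a reconstruction term plus a KL-divergence term that pulls the learned latent distribution towards a standard normal prior. Below is a minimal PyTorch sketch of that loss; the function name vae_loss and the binary cross-entropy reconstruction term are illustrative assumptions, not necessarily what this repository's train.py uses.

import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term: how faithfully the decoder rebuilds the input
    # (binary cross-entropy is a common choice for MNIST pixels in [0, 1])
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # KL divergence between q(z|x) = N(mu, diag(sigma^2)) and the prior N(0, I)
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld

Minimizing this sum trades off sharp reconstructions against a smooth, well-behaved latent space.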

[architecture diagram]

Passing inputs through the encoder network yields a mean and a variance for each data point. These parameters define the compressed latent representation at the bottleneck; the decoder then samples from this distribution and uses the samples to reconstruct the images.
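
The sampling step is typically implemented with the reparameterization trick, which keeps the whole network differentiable end to end. Here is a minimal sketch of such a model; the layer sizes are illustrative choices for flattened 28x28 MNIST images and may not match src/model.py exactly.

import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder: input -> hidden -> (mean, log-variance) of q(z|x)
        self.enc = nn.Linear(in_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: latent sample -> hidden -> reconstructed input
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, in_dim)

    def encode(self, x):
        h = torch.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients flow through mu and logvar
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = torch.relu(self.dec1(z))
        return torch.sigmoid(self.dec2(h))  # pixel values in [0, 1]

    def forward(self, x):
        mu, logvar = self.encode(x.view(x.size(0), -1))  # flatten images
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar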

Instructions to run

To run the code in this repository, follow the instructions given below:

$ git clone https://github.com/01pooja10/Variational-Autoencoder
$ cd Variational-Autoencoder/src
$ python model.py 
$ python train.py

This will allow you to train the model from scratch. Don't forget to experiment with the hyperparameters to achieve better results! You can then generate your own images resembling the MNIST dataset by running the following lines:

$ cd src
$ python inference.py
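
Conceptually, generation amounts to drawing latent vectors from the standard normal prior and decoding them into images. A hypothetical sketch, reusing the VAE class from above (the checkpoint path vae.pth and the batch size of 16 are made up; see src/inference.py for the actual script):

import torch

model = VAE()
model.load_state_dict(torch.load("vae.pth"))  # illustrative checkpoint path
model.eval()

with torch.no_grad():
    z = torch.randn(16, 20)               # 16 latent vectors from N(0, I)
    samples = model.decode(z)             # shape (16, 784)
    images = samples.view(16, 1, 28, 28)  # reshape into MNIST-sized images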

Thanks for visiting my repository. I hope you found it informative and useful.

Contributor

Pooja Ravi

License

MIT © Pooja Ravi

This project is licensed under the MIT License - see the License file for details
