This repository demonstrates how to build, train, and run inference with a variational autoencoder (VAE) from scratch using the PyTorch deep learning framework. The code is available in the source (src) folder.
A variational autoencoder combines the encoder-decoder architecture with variational inference to generate new data points. Instead of mapping each input to a single point, the encoder maps inputs to probability distributions in the latent space, which provides more flexibility and better generation capabilities.
Passing an input through the encoder network yields a mean and a variance for that input, which together define a probability distribution over the latent space. A latent vector is sampled from this distribution at the bottleneck, and the decoder then uses this compressed representation to reconstruct the original image as accurately as possible.
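The encode-sample-decode flow described above can be sketched in PyTorch. This is a minimal illustration, not the repository's actual implementation (see src/model.py for that); the layer sizes, the 2-dimensional latent space, and the names `VAE` and `vae_loss` are assumptions made here for clarity.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    # Illustrative VAE for flattened 28x28 MNIST images; the real
    # architecture in src/model.py may use different sizes and layers.
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=2):
        super().__init__()
        # Encoder: flattened image -> hidden representation
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # Two heads: mean and log-variance of the latent distribution
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: latent sample -> reconstructed image in [0, 1]
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu and logvar
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # ELBO objective: reconstruction error plus KL divergence
    # between the learned latent distribution and the N(0, I) prior
    bce = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```

Training then amounts to minimizing this loss over batches of images, which is what train.py does in practice.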
To implement this repository, follow the instructions given below:
$ git clone https://github.com/01pooja10/Variational-Autoencoder
$ cd src
$ python model.py
$ python train.py
This will train the model from scratch. Don't forget to experiment with the hyperparameters to achieve better results! You can then generate your own images resembling the MNIST dataset by running the following:
$ cd src
$ python inference.py
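Conceptually, generation works by drawing latent vectors from the standard normal prior and passing them through the trained decoder. The sketch below illustrates this idea only; the untrained stand-in decoder and the 2-dimensional latent size are assumptions, whereas inference.py loads the actual trained weights.

```python
import torch
import torch.nn as nn

latent_dim = 2  # assumed latent size; must match the trained model

# Stand-in decoder for illustration; inference.py would load trained weights
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Sigmoid(),
)

with torch.no_grad():
    z = torch.randn(16, latent_dim)          # 16 samples from the N(0, I) prior
    images = decoder(z).view(-1, 1, 28, 28)  # decode and reshape to 28x28 images
```

With a trained decoder, each sampled z decodes to a plausible MNIST-style digit.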
Thanks for visiting my repository. Hope you found it informative and useful.
Pooja Ravi
MIT © Pooja Ravi
This project is licensed under the MIT License - see the License file for details