This is a simple implementation of the paper *IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis*. We only test the idea on MNIST; LSUN and CelebA examples can be found in other implementations.
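For reference, below is a minimal sketch (in PyTorch, not the exact code in `main.py`) of the two IntroVAE loss terms described in the paper. The `encoder`/`decoder` modules, the margin `m`, and the weights `alpha`/`beta` are assumptions made for illustration.

```python
# A minimal sketch of the IntroVAE encoder/decoder losses from the paper.
# `encoder` is assumed to return (mu, logvar); `decoder` maps z to an image.
import torch
import torch.nn.functional as F

def kl_loss(mu, logvar):
    # KL divergence between N(mu, sigma^2) and the standard normal prior,
    # averaged over the batch
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()

def introvae_losses(encoder, decoder, x, m=10.0, alpha=0.25, beta=1.0):
    # Encode the real image and reconstruct it
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    x_r = decoder(z)

    # Sample a new image from the prior
    z_p = torch.randn_like(z)
    x_p = decoder(z_p)

    # Reconstruction and prior-regularization terms
    l_ae = F.mse_loss(x_r, x)
    l_reg_real = kl_loss(mu, logvar)

    # Encoder loss: push KL of reconstructed/sampled images above the margin m
    # (gradients through the decoder are blocked with detach, as in the paper)
    mu_r, logvar_r = encoder(x_r.detach())
    mu_p, logvar_p = encoder(x_p.detach())
    l_enc = l_reg_real + alpha * (
        F.relu(m - kl_loss(mu_r, logvar_r)) + F.relu(m - kl_loss(mu_p, logvar_p))
    ) + beta * l_ae

    # Decoder loss: make its outputs look "real" to the encoder (low KL)
    mu_r2, logvar_r2 = encoder(x_r)
    mu_p2, logvar_p2 = encoder(x_p)
    l_dec = alpha * (kl_loss(mu_r2, logvar_r2) + kl_loss(mu_p2, logvar_p2)) + beta * l_ae

    return l_enc, l_dec
```

The two losses are minimized by separate optimizers for the encoder and decoder parameters, which is what gives the training its introspective, GAN-like behaviour.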
- Train the model for 100 epochs:
$ python3 main.py --epochs 100
- Sample 20 digit images (see the CLI sketch below):
$ python3 main.py --n 20
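Both commands presumably go through a single argparse entry point in `main.py`. A rough sketch: only the `--epochs` and `--n` flags come from the commands above; the defaults and helper names are assumptions.

```python
# Hypothetical CLI wiring for main.py; only --epochs and --n are taken
# from the usage above, everything else is an assumption.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="IntroVAE on MNIST")
    parser.add_argument("--epochs", type=int, default=0,
                        help="number of training epochs (0 = skip training)")
    parser.add_argument("--n", type=int, default=0,
                        help="number of digit images to sample (0 = skip sampling)")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    if args.epochs > 0:
        print(f"would train for {args.epochs} epochs")  # train(args.epochs) in the real script
    if args.n > 0:
        print(f"would sample {args.n} images")          # sample(args.n) in the real script
```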
The figure above shows the loss curves during IntroVAE training. We only test on images of size 64x64.