This repository provides a standardized implementation framework for several popular decoder-based deep generative models. The following models are currently implemented:
- VAE
- VAE (autoregressive inference)
In all cases, posterior regularization is applied to disentangle style (z) from content/label (y) on the SVHN dataset.
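As a rough illustration of the idea, here is a minimal numpy sketch of a posterior-regularization term on the labeled subset, plus the standard Gaussian KL term of the VAE objective. The function names and the exact form of the loss are assumptions for illustration, not the repo's actual code:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def posterior_regularization(y_logits, labels):
    """Cross-entropy between q(y|x) and the true labels on the labeled
    subset -- one common way to regularize the content posterior so that
    y captures the label and z the style. (Illustrative sketch only.)
    y_logits: (n, k) classifier logits; labels: (n,) integer class ids.
    """
    q_y = softmax(y_logits)
    n = labels.shape[0]
    return -np.mean(np.log(q_y[np.arange(n), labels] + 1e-8))

def gaussian_kl(mu, logvar):
    """KL(q(z|x) || N(0, I)) per sample, summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)
```

In a semi-supervised setup, the regularization term is computed only on the labeled minibatch and added to the usual ELBO.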
TODO:
- AC-GAN/InfoGAN (see this repo)
- BEGAN (see this repo)
- WGAN
You'll need:
- tensorflow==1.4.0
- scipy==0.19.0
- tensorbayes==0.3.0
All execution scripts adhere to the following format:
python run_*.py --cmd
A list of possible command-line arguments can be found in each run_*.py script. The default arguments set up a semi-supervised regime where only 1000 of the training samples are labeled, so posterior regularization is done in a semi-supervised manner. In the case of the VAE, pay attention to the choice of encoder/decoder architecture controlled by the --design argument, as this determines whether autoregression is applied during inference. TensorBoard logs are automatically saved to ./log/ and models are saved to ./checkpoints/.
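For reference, a class-balanced 1000-label split of the kind this default implies can be sketched as follows (the function name and seeding are assumptions; the repo's own splitting code may differ):

```python
import numpy as np

def semisup_split(labels, n_labeled=1000, n_classes=10, seed=0):
    """Pick a class-balanced labeled subset of n_labeled samples and
    treat the remainder as unlabeled. Illustrative sketch only.
    labels: (n,) integer class ids.
    """
    rng = np.random.RandomState(seed)
    per_class = n_labeled // n_classes
    labeled_idx = []
    for c in range(n_classes):
        idx = np.flatnonzero(labels == c)
        labeled_idx.extend(rng.choice(idx, per_class, replace=False))
    labeled_idx = np.array(labeled_idx)
    unlabeled_idx = np.setdiff1d(np.arange(len(labels)), labeled_idx)
    return labeled_idx, unlabeled_idx
```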
z-space and y-space interpolations are provided below, respectively.
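The z-space interpolations amount to decoding points on a line between two style codes; a minimal sketch of that step (the decoder itself is omitted, and the helper name is an assumption):

```python
import numpy as np

def interpolate_z(z_a, z_b, steps=8):
    """Linear interpolation between two style codes z_a and z_b.
    Returns a (steps, dim) array; decoding each row (with y held fixed)
    yields the z-space interpolation grid. For y-space interpolations,
    z is held fixed while the one-hot y code is varied instead.
    """
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z_a[None, :] + alphas * z_b[None, :]
```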
There is no noticeable difference from the vanilla VAE. I'd be curious to see whether top-down inference makes a difference.