
Variational-AutoEncoder

A PyTorch implementation of VAE and CVAE, trained on MNIST.

Experiment

I train the VAE with a single hidden layer; the remaining hyperparameters are in vae.ipynb.
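
The exact architecture and hyperparameters live in vae.ipynb; purely as an illustration, a one-hidden-layer VAE in PyTorch could look like the sketch below. The layer sizes (784-400-2) and the loss form are assumptions for this sketch, not necessarily the notebook's values.

```python
# A minimal one-hidden-layer VAE sketch for flattened 28x28 MNIST images.
# Sizes are illustrative assumptions, not the values from vae.ipynb.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=2):
        super().__init__()
        self.fc_enc = nn.Linear(input_dim, hidden_dim)      # encoder hidden layer
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.fc_dec = nn.Linear(latent_dim, hidden_dim)     # decoder hidden layer
        self.fc_out = nn.Linear(hidden_dim, input_dim)      # reconstruction

    def encode(self, x):
        h = F.relu(self.fc_enc(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = F.relu(self.fc_dec(z))
        return torch.sigmoid(self.fc_out(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term plus KL divergence between q(z|x) and N(0, I)
    bce = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```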

With the latent dimension set to 2, the latent space looks like this:

[Figures: latent space of the VAE (without conditioning) and of the CVAE (with conditioning)]
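
A plot like this can be produced by encoding test images and scattering the 2-D posterior means, colored by digit. The sketch below assumes a trained `model` with the `encode` method from the sketch above.

```python
# Visualize the 2-D latent space: encode MNIST test images and scatter
# the posterior means mu, colored by digit label. Assumes a trained
# `model` exposing encode() as in the sketch above.
import matplotlib.pyplot as plt
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

test_set = datasets.MNIST("./data", train=False, download=True,
                          transform=transforms.ToTensor())
loader = DataLoader(test_set, batch_size=256)

zs, ys = [], []
model.eval()
with torch.no_grad():
    for x, y in loader:
        mu, _ = model.encode(x.view(x.size(0), -1))
        zs.append(mu)
        ys.append(y)
zs, ys = torch.cat(zs), torch.cat(ys)

plt.scatter(zs[:, 0].numpy(), zs[:, 1].numpy(),
            c=ys.numpy(), cmap="tab10", s=2)
plt.colorbar(label="digit")
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.show()
```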

Sampling 10 images:

[Figures: 10 samples from the VAE (without conditioning) and from the CVAE (with conditioning)]
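
Sampling amounts to drawing z from the standard normal prior and decoding it; for the CVAE, a one-hot digit label is concatenated to z before decoding. A sketch, again assuming the `model` from above (the commented `cvae` line is hypothetical and assumes a decoder trained on [z, one_hot(y)] inputs):

```python
# Draw z ~ N(0, I) and decode it into images. The CVAE variant
# (commented out; `cvae` is hypothetical) conditions on digits 0..9
# by concatenating a one-hot label to z.
import matplotlib.pyplot as plt
import torch
import torch.nn.functional as F

n, latent_dim = 10, 2
model.eval()
with torch.no_grad():
    z = torch.randn(n, latent_dim)   # samples from the prior
    imgs = model.decode(z)           # VAE: decode z directly
    # y = F.one_hot(torch.arange(n), num_classes=10).float()
    # imgs = cvae.decode(torch.cat([z, y], dim=1))  # CVAE: decode [z, label]

fig, axes = plt.subplots(1, n, figsize=(n, 1))
for ax, img in zip(axes, imgs):
    ax.imshow(img.view(28, 28).numpy(), cmap="gray")
    ax.axis("off")
plt.show()
```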

With the latent dimension set to 8, sampling 10 images:

[Figures: 10 samples from the VAE (without conditioning) and from the CVAE (with conditioning), latent dimension 8]

Summary

When the latent dimension is 2, the images generated by the CVAE are clearer than those from the VAE, because the CVAE has access to the label information. When the latent dimension goes higher, the results get worse. I think that as the latent dimension grows, the latent space becomes more complex, so the distribution of the data is harder to capture. Moreover, samples from the latent space can confound the label information once the latent dimension becomes comparable to the label dimension.
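
For reference, both models maximize the evidence lower bound; the CVAE simply conditions each term on the label y, which is where the extra label information discussed above enters:

$$\mathcal{L}_{\mathrm{VAE}} = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big)$$

$$\mathcal{L}_{\mathrm{CVAE}} = \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(x \mid z, y)\big] - D_{\mathrm{KL}}\big(q_\phi(z \mid x, y) \,\|\, p(z \mid y)\big)$$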
