PyTorch implementations of a VAE and a CVAE trained on MNIST.
I trained the VAE with a single hidden layer; the other hyperparameters are in vae.ipynb.
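A minimal sketch of a one-hidden-layer VAE of this kind. The layer sizes (784 → 400 → latent) are assumptions for illustration; the actual values used here are in vae.ipynb.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, latent_dim=2, hidden_dim=400):
        super().__init__()
        self.fc1 = nn.Linear(784, hidden_dim)         # encoder hidden layer
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        self.fc2 = nn.Linear(latent_dim, hidden_dim)  # decoder hidden layer
        self.fc3 = nn.Linear(hidden_dim, 784)

    def encode(self, x):
        h = F.relu(self.fc1(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = F.relu(self.fc2(z))
        return torch.sigmoid(self.fc3(h))

    def forward(self, x):
        mu, logvar = self.encode(x.view(-1, 784))
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction term + KL divergence to the standard normal prior
    bce = F.binary_cross_entropy(recon, x.view(-1, 784), reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```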
Setting the latent dimension to 2, the latent space looks like:
VAE | CVAE |
---|---|
Sampling 10 images:
VAE | CVAE |
---|---|
Setting the latent dimension to 8 and sampling 10 images:
VAE | CVAE |
---|---|
When the latent dimension is 2, the images generated by the CVAE are clearer than those from the VAE, because the CVAE has access to label information. When the latent dimension goes higher, the results get worse. I think this is because a higher-dimensional latent space is more complex, so the data distribution becomes harder to capture. Moreover, samples drawn from the latent space can confuse the label information when the latent dimension becomes comparable to the label dimension.
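A hedged sketch of how the conditioning works: the one-hot label is concatenated to the latent code before decoding, so sampling a fixed digit means drawing codes from the prior and decoding them with that label. Layer sizes and names mirror the VAE sketch above and are assumptions, not the notebook's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAEDecoder(nn.Module):
    """Decoder half of a CVAE: maps (latent code, one-hot label) to an image."""

    def __init__(self, latent_dim=2, hidden_dim=400, num_classes=10):
        super().__init__()
        self.latent_dim = latent_dim
        self.num_classes = num_classes
        # the decoder input is the latent code concatenated with the label
        self.fc2 = nn.Linear(latent_dim + num_classes, hidden_dim)
        self.fc3 = nn.Linear(hidden_dim, 784)

    def decode(self, z, y_onehot):
        h = F.relu(self.fc2(torch.cat([z, y_onehot], dim=1)))
        return torch.sigmoid(self.fc3(h))

    def sample(self, digit, n=10):
        # draw n latent codes from the N(0, I) prior and decode them all
        # with the same fixed digit label
        z = torch.randn(n, self.latent_dim)
        y = F.one_hot(torch.full((n,), digit), self.num_classes).float()
        return self.decode(z, y)
```

Note that with `latent_dim=8` the latent code and the 10-dimensional one-hot label are comparable in size, which is the regime the paragraph above suggests lets latent samples interfere with the label signal.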