Various GANs with Chainer
- Chainer==1.24.0
- OpenCV
- DCGAN: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
- ACGAN: Conditional Image Synthesis With Auxiliary Classifier GANs
- LSGAN: Least Squares Generative Adversarial Networks
- WGAN: Wasserstein GAN
- WGAN-GP: Improved Training of Wasserstein GANs
- DRAGAN: How to Train Your DRAGAN
- CramerGAN: The Cramer Distance as a Solution to Biased Wasserstein Gradients
- αGAN: Variational Approaches for Auto-Encoding Generative Adversarial Networks
By default, all models are tested on the CelebA dataset. You can find the training results in the corresponding folders.
Most recent GANs (WGAN-GP, CramerGAN, DRAGAN) include a gradient norm penalty, which has been shown to stabilize GAN training.
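For reference, a restatement of the penalty term from the WGAN-GP paper (λ = 10 by default; x̂ is sampled uniformly on straight lines between pairs of real and generated samples, while DRAGAN instead samples x̂ as noisy perturbations of real data):

```latex
% Gradient penalty added to the critic loss (two-sided, as in WGAN-GP)
L_{GP} = \lambda \,\mathbb{E}_{\hat{x}}\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^2\right]
```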
The current version of Chainer (1.24.0) does not support higher-order derivatives; a workaround is to implement the backward procedure manually with auto-differentiable chainer.functions, so that the gradient penalty stays inside the computation graph. (Refer to the WGAN-GP code for details; a minimal sketch is also given below.)
- Manual backward passes are implemented for L.Linear, L.Convolution2D, L.Deconvolution2D, F.leaky_relu, F.relu, F.sigmoid, F.tanh, and L.LayerNormalization.
- Some GAN papers suggest using LayerNormalization instead of BatchNormalization in the discriminator when a gradient penalty is applied, since the penalty is computed per sample while BatchNormalization couples the samples within a batch.
Special thanks to mattya for the idea and reference code.
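The sketch below illustrates the manual-backward trick on a toy two-layer MLP critic (the discriminators in this repository are convolutional, so the names `MLPCritic`, `differentiable_backward`, and `gradient_penalty` here are hypothetical). The input gradient is rebuilt from ordinary chainer.functions calls, so the gradient-norm penalty remains differentiable even without double backprop:

```python
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L


class MLPCritic(chainer.Chain):
    """Toy critic; layer sizes are arbitrary for illustration."""

    def __init__(self, n_in=784, n_hidden=512):
        super(MLPCritic, self).__init__(
            l1=L.Linear(n_in, n_hidden),
            l2=L.Linear(n_hidden, 1),
        )

    def __call__(self, x):
        h = F.leaky_relu(self.l1(x), slope=0.2)
        y = self.l2(h)
        return y, h  # keep the hidden activation for the manual backward

    def differentiable_backward(self, g, h):
        # Backward pass written with ordinary chainer.functions, so the
        # resulting input gradient is itself a node in the graph.
        g = F.matmul(g, self.l2.W)                  # back through l2
        # leaky_relu preserves the sign, so h > 0 iff the pre-activation > 0
        mask = (h.data > 0).astype(np.float32) * 0.8 + 0.2
        g = g * chainer.Variable(mask)              # back through leaky_relu
        g = F.matmul(g, self.l1.W)                  # back through l1
        return g


def gradient_penalty(critic, x_real, x_fake, lam=10.0):
    xp = critic.xp
    n = x_real.shape[0]
    # Interpolate between real and fake samples, as in WGAN-GP.
    eps = xp.random.uniform(0.0, 1.0, (n, 1)).astype(np.float32)
    x_hat = chainer.Variable(eps * x_real + (1.0 - eps) * x_fake)
    y, h = critic(x_hat)
    # The gradient of sum(y) w.r.t. y is a tensor of ones.
    ones = chainer.Variable(xp.ones_like(y.data))
    g = critic.differentiable_backward(ones, h)
    g_norm = (F.sum(g * g, axis=1) + 1e-8) ** 0.5
    return lam * F.sum((g_norm - 1.0) ** 2) / n
```

The returned penalty is simply added to the critic's loss before calling `backward()`; because every step above is built from differentiable operations, the optimizer update also accounts for how the penalty depends on the discriminator weights.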
Some DRAGAN results: