# Performance-comparison-of-GAN-on-cifar-10

Performance comparison of ACGAN, BEGAN, CGAN, DRAGAN, EBGAN, GAN, infoGAN, LSGAN, VAE, WGAN, WGAN_GP on cifar-10

Reference: https://github.com/hwalsuklee/tensorflow-generative-model-collections
The original code targets MNIST; we changed the network structures to fit cifar-10 and evaluated Inception Score.
The network structures are almost the same across models.
The following results can be reproduced with the command:

python main.py --dataset cifar-10 --gan_type <GAN_TYPE> --epoch 60 --batch_size 64
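
The Inception Score used for evaluation can be sketched in a few lines of NumPy, assuming the class-probability matrix p(y|x) from a pretrained Inception network has already been computed (the repository's actual evaluation code may differ):

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score from a matrix of class probabilities.

    probs: (N, C) array, row i = p(y|x_i) from the Inception network.
    IS = exp( E_x[ KL( p(y|x) || p(y) ) ] ), higher is better.
    """
    p_y = probs.mean(axis=0, keepdims=True)  # marginal label distribution p(y)
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))
```

Confident, diverse predictions give a high score; if every sample gets the same distribution, the score is 1.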

# ACGAN

# BEGAN
The results are not good; we did not spend much time tuning the hyperparameters.

# CGAN

# DRAGAN
Stable, robust, and fast to converge.
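
DRAGAN's stability comes from its gradient penalty, applied at points perturbed around the real data. A minimal NumPy sketch of the two ingredients (lam = 10 is the paper's default; the linear critic below is only for illustration, since its input gradient is known without autodiff):

```python
import numpy as np

def dragan_perturb(x, rng):
    """DRAGAN places the penalty near the data manifold:
    x_p = x + 0.5 * std(x) * u,  u ~ U(0, 1)."""
    return x + 0.5 * x.std() * rng.random(x.shape)

def gradient_penalty(grad, lam=10.0):
    """lam * (||grad_x D(x_p)||_2 - 1)^2, averaged over the batch."""
    norms = np.linalg.norm(grad, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)

# Toy check with a linear critic D(x) = x @ w: its input gradient is w
# for every sample.
rng = np.random.default_rng(0)
x_p = dragan_perturb(rng.normal(size=(64, 8)), rng)
w = np.zeros(8); w[0] = 1.0                             # unit-norm gradient
print(gradient_penalty(np.tile(w, (x_p.shape[0], 1))))  # 0.0
```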

# EBGAN
The network structure is the same as BEGAN, but training collapses.

# GAN

# infoGAN

# LSGAN (Least Squares GAN)
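
LSGAN replaces the usual cross-entropy GAN objective with least-squares losses. A NumPy sketch using the 0-1-1 coding scheme (a = fake label, b = real label, c = the value the generator wants the discriminator to output on fakes, as in the paper):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    """Discriminator pulls real outputs toward b and fake outputs toward a."""
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_g_loss(d_fake, c=1.0):
    """Generator pulls the discriminator's output on fakes toward c."""
    return 0.5 * np.mean((d_fake - c) ** 2)
```

The quadratic loss penalizes samples that sit far on the correct side of the decision boundary, which the paper argues gives smoother gradients than the sigmoid cross-entropy loss.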

# WGAN
Not as good as reported in the paper. The network structure is the same as GAN, but it converges too slowly.
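
For reference, WGAN's critic objective and the weight clipping it uses to enforce the Lipschitz constraint can be sketched as follows (c = 0.01 is the clipping value from the paper; loss signs are written for minimization):

```python
import numpy as np

def wgan_critic_loss(d_real, d_fake):
    """Critic maximizes E[D(real)] - E[D(fake)]; we minimize the negative."""
    return -(np.mean(d_real) - np.mean(d_fake))

def wgan_generator_loss(d_fake):
    """Generator maximizes E[D(fake)]."""
    return -np.mean(d_fake)

def clip_weights(weights, c=0.01):
    """After each critic update, clip every weight into [-c, c]."""
    return [np.clip(w, -c, c) for w in weights]
```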

# WGAN_GP
The discriminator is trained for 300 epochs in total, but the generator for only 60 (the same number as the other models). Converges slowly.
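
The 300/60 split is consistent with WGAN-GP's usual n_critic = 5 schedule (five critic updates per generator update), which is a sketch of the training loop:

```python
# Five critic updates per generator update, as in the WGAN-GP paper.
# With 60 generator epochs this yields 5 * 60 = 300 critic epochs.
n_critic = 5
d_steps = g_steps = 0
for epoch in range(60):
    for _ in range(n_critic):
        d_steps += 1          # critic update (with gradient penalty)
    g_steps += 1              # then one generator update
print(d_steps, g_steps)       # 300 60
```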

# VAE
Collapsed. We also tried adding and removing batch-normalization layers, but it did not help.
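
For context, the standard VAE objective is a reconstruction loss plus a KL regularizer on the latent posterior; a NumPy sketch of the KL term (a diagonal-Gaussian posterior against a standard-normal prior is assumed):

```python
import numpy as np

def vae_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent
    dimensions and averaged over the batch."""
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1)
    return float(np.mean(kl))
```

When this term drives the posterior to match the prior exactly (KL near zero) while reconstructions stay poor, the latent code is being ignored, which is one thing worth checking in a collapsed run.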