eriklindernoren/Keras-GAN

DCGAN can only generate noise images

leeprinxin opened this issue · 5 comments

I went through the tutorial (https://livebook.manning.com/book/gans-in-action/chapter-4/103) and tried to construct the DCGAN model.

I ran it in the Colab environment, with Keras 2.4.3 (or tensorflow.keras 2.4.0) and TensorFlow 2.4.1.

But after running it, the generator only comes up with noise images.

I tried replacing the optimizer with RMSprop, and it also only produces noise images.
My code link: https://gist.github.com/leeprinxin/967ce5c24b163c68d13ec5305dea7207

In the book, they did not actually specify the learning rate explicitly. The typical learning rate for RMSprop is 0.0002 or 0.00005, as seen in most of the papers. This may be one of the problems, since DCGAN requires learning-rate tuning.
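For reference, a minimal sketch of passing an explicit RMSprop learning rate when compiling the discriminator (the 0.0002 value is the one mentioned above, not taken from the book, and the tiny model here is a hypothetical stand-in for the tutorial's discriminator):

```python
import tensorflow as tf

# Explicit learning rate instead of relying on the framework default;
# 0.0002 (or 0.00005) is the value commonly used in DCGAN papers.
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.0002)

# Hypothetical discriminator stand-in; the real model comes from the tutorial.
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(loss="binary_crossentropy",
                      optimizer=optimizer,
                      metrics=["accuracy"])
```

The same optimizer object should not be shared between the discriminator and the combined model; each compile call gets its own instance.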


I finally solved this problem.
I downgraded Keras to version 2.3.1 and it worked.
But I don't know why Keras 2.4.3 generates noise.

I encountered the same issue with InfoGAN

On the TensorFlow documentation page for BatchNormalization, it says that there was a behavioral change between TF 1.x and 2.x:

setting trainable = False on the layer means that the layer will be subsequently run in inference mode [...] This behavior only occurs as of TensorFlow 2.0. In 1.*, setting layer.trainable = False would freeze the layer but would not switch it to inference mode.
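The quoted change can be checked directly. A small sketch (assuming TF 2.x is installed) showing that once `trainable = False` is set, the layer ignores `training=True` and falls back to its moving statistics:

```python
import numpy as np
import tensorflow as tf

bn = tf.keras.layers.BatchNormalization()
x = tf.random.normal([32, 8]) * 3.0 + 5.0  # data with non-zero mean/variance

bn(x, training=True)  # build the layer and update moving statistics once

bn.trainable = False
frozen_train = bn(x, training=True)   # TF 2.x: runs in inference mode anyway
inference = bn(x, training=False)     # explicit inference mode

# The two outputs match: the frozen layer uses moving statistics,
# not the per-batch statistics it would have used in TF 1.x.
print(np.allclose(frozen_train.numpy(), inference.numpy()))
```

This is exactly why the frozen discriminator's BatchNormalization layers behave differently inside the combined GAN model under TF 2.x, which matches the noise-only symptom.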

Changing the import statement for BatchNormalization to

from tensorflow.compat.v1.keras.layers import BatchNormalization

This seems to fix the issue and produces the output you'd expect (at least in InfoGAN's case).

Note: To get the InfoGAN example script to run on the current TF build, the import statements needed to be changed to

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense, Reshape, Flatten, Dropout
from tensorflow.keras.layers import Activation, Embedding, ZeroPadding2D, Lambda
from tensorflow.compat.v1.keras.layers import BatchNormalization
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import UpSampling2D, Conv2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
import tensorflow.keras.backend as K

I also tried this tutorial and found that it helps to add the momentum parameter here:
model.add(BatchNormalization(momentum=0.8))
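For context, a sketch of where that momentum argument sits in a typical DCGAN generator block (the layer sizes here are illustrative, not the tutorial's exact ones). A lower momentum than the default 0.99 lets the moving statistics track the generator's shifting output distribution faster:

```python
import tensorflow as tf
from tensorflow.keras.layers import (BatchNormalization, Conv2D, Dense,
                                     LeakyReLU, Reshape, UpSampling2D)

# Illustrative DCGAN-style generator: 100-dim noise -> 28x28x1 image.
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(100,)),
    Dense(128 * 7 * 7),
    Reshape((7, 7, 128)),
    UpSampling2D(),
    Conv2D(64, kernel_size=3, padding="same"),
    BatchNormalization(momentum=0.8),  # faster-adapting moving statistics
    LeakyReLU(0.2),
    UpSampling2D(),
    Conv2D(1, kernel_size=3, padding="same", activation="tanh"),
])
```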

Using spectral normalization on top of the discriminator's Conv2D layers will stabilize the training greatly. Also pay attention to the kernel_initializer (glorot_normal, etc.).
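Spectral normalization divides each weight matrix by its largest singular value, keeping the discriminator roughly 1-Lipschitz. A minimal NumPy sketch of the idea using power iteration (in practice you would wrap the Conv2D kernel, e.g. with TensorFlow Addons' SpectralNormalization layer, rather than normalize by hand):

```python
import numpy as np

def spectral_normalize(W, n_iters=50):
    """Estimate the largest singular value of W by power iteration
    and return W scaled so its spectral norm is ~1."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # Rayleigh-quotient estimate of the top singular value
    return W / sigma

W = np.random.default_rng(1).standard_normal((16, 8))
W_sn = spectral_normalize(W)
print(np.linalg.norm(W_sn, 2))  # ~1.0
```

Real implementations keep `u` as a persistent buffer and run only one power-iteration step per training step, which is cheap and converges over the course of training.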