roatienza/Deep-Learning-Experiments

DCGAN loss

eyaler opened this issue · 3 comments

In
https://github.com/roatienza/Deep-Learning-Experiments/blob/master/Experiments/Tensorflow/GAN/dcgan_mnist.py
you compute the generator loss as:
a_loss = self.adversarial.train_on_batch(noise, y)
but this call also trains the discriminator, using only the fake samples.
Shouldn't you freeze the discriminator weights for this step?
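
To illustrate the default behavior, here is a quick probe (a sketch; it assumes the stacked self.adversarial model built by the script):

    # every layer in a freshly built Keras model reports trainable=True,
    # so train_on_batch on the stacked model updates the discriminator too
    for layer in self.adversarial.layers:
        print(layer.name, layer.trainable)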

@eyaler Exactly my doubt.

hmaon commented

Yeah... you can change self.AM.add(self.discriminator()) in adversarial_model() to this:

        # build the discriminator and freeze every layer so that training
        # the adversarial model only updates the generator's weights
        dc = self.discriminator()
        for layer in dc.layers:
            layer.trainable = False
        self.AM.add(dc)

You'll get a warning (Keras reports a discrepancy between trainable and collected trainable weights, because the same layers are still marked trainable in the separately compiled discriminator model), but the discriminator will be frozen for a_loss = self.adversarial.train_on_batch(noise, y).
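
For reference, here is a self-contained sketch of adversarial_model() with the freeze in place (a minimal Keras example; the optimizer settings and the generator()/discriminator() builders are assumptions, not copied from the script):

    from keras.models import Sequential
    from keras.optimizers import RMSprop

    def adversarial_model(self):
        if self.AM:
            return self.AM
        self.AM = Sequential()
        self.AM.add(self.generator())
        # freeze the discriminator before compiling so the adversarial
        # update only moves the generator's weights
        dc = self.discriminator()
        for layer in dc.layers:
            layer.trainable = False
        self.AM.add(dc)
        self.AM.compile(loss='binary_crossentropy',
                        optimizer=RMSprop(lr=0.0001, decay=3e-8),
                        metrics=['accuracy'])
        return self.AM

Keras reads the trainable flags at compile time, so freezing before self.AM.compile(...) is what makes the freeze stick for the adversarial updates.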

I verified the change with this instrumentation code:

            print("before adversarial.train " + str(keras.backend.eval(self.adversarial.layers[1].layers[0].weights[0][0][0][0][0])))
            a_loss = self.adversarial.train_on_batch(noise, y)
            print("after  adversarial.train " + str(keras.backend.eval(self.adversarial.layers[1].layers[0].weights[0][0][0][0][0])))

You're right, we should freeze the discriminator.
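
For anyone landing here later, the per-batch update order then looks like this (a sketch following the script's variable names where possible; the batch size, the 100-dimensional noise, and the label values are assumptions):

    import numpy as np

    # 1) train the discriminator on real (label 1) and fake (label 0) images
    i = np.random.randint(0, self.x_train.shape[0], size=batch_size)
    images_real = self.x_train[i, :, :, :]
    noise = np.random.uniform(-1.0, 1.0, size=[batch_size, 100])
    images_fake = self.generator.predict(noise)
    x = np.concatenate((images_real, images_fake))
    y = np.ones([2 * batch_size, 1])
    y[batch_size:, :] = 0
    d_loss = self.discriminator.train_on_batch(x, y)

    # 2) train the generator through the frozen discriminator: label the
    #    fakes as 1 so the generator learns to fool the discriminator
    noise = np.random.uniform(-1.0, 1.0, size=[batch_size, 100])
    y = np.ones([batch_size, 1])
    a_loss = self.adversarial.train_on_batch(noise, y)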