SeungjunNah/DeepDeblur_release

function for evaluating the discriminator loss


Line 198 of train.lua, which computes entropy.fake, uses output_label, which was computed from the generator output (line 179) before the backward pass (line 189).

Is that correct? Shouldn't entropy.fake for the discriminator (line 198) be estimated from a generator output produced with the updated generator weights, i.e., after the backward pass at line 189?

Hi @prashnani,
Sorry for the very late reply; I couldn't respond earlier because I was away for my Korean army training.

This is a matter of how to optimize the networks.
As you noticed, I didn't follow the original GAN optimization scheme: first train the discriminator, then optimize the generator to deceive the trained discriminator.
I think that scheme is used because pure GANs have no training signal for the generator other than the trained discriminator.
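
For reference, Equation 1 of the original GAN paper (Goodfellow et al., 2014) is the minimax game

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big],$$

which is typically optimized by alternating between D updates and G updates.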

However, in this case we have a hybrid loss: an MSE loss plus an adversarial loss.
Many possible update orders then exist.
For example,

  1. update G with the MSE loss -> update D -> update G to deceive D
  2. update D with the initial estimate -> update G to deceive D -> update G with the MSE loss
  3. update D with the initial estimate -> update G with the MSE loss -> update G to deceive D
  4. update D with the initial estimate -> update G with the MSE loss and to deceive D simultaneously
  5. ...
  6. update G with the MSE loss and to deceive D simultaneously || update D with the previous G output

Among these many combinations, I simply chose to combine every loss into one and compute the gradient in a single backward pass, since that reduces backward computation while the loss itself remains valid.
Note that in Equations 5 and 6 of the paper I did not write an optimization order the way Equation 1 of the original GAN paper does, which means every loss is optimized simultaneously.
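
For concreteness, here is a minimal Torch7-style sketch of that combined update (option 6 above). The toy networks, variable names (netG, netD, blur, sharp, lambda), and hyperparameter values are hypothetical illustrations, not the actual code in train.lua:

```lua
require 'nn'

-- Toy stand-ins for the networks in train.lua (hypothetical shapes and names).
local netG = nn.Sequential():add(nn.Linear(16, 16))                   -- generator
local netD = nn.Sequential():add(nn.Linear(16, 1)):add(nn.Sigmoid())  -- discriminator

local mse, bce = nn.MSECriterion(), nn.BCECriterion()
local lambda, lr = 1e-3, 1e-4   -- adversarial weight and learning rate (hypothetical)

local blur, sharp = torch.randn(16), torch.randn(16)  -- dummy training pair
local real_label, fake_label = torch.ones(1), torch.zeros(1)

-- ---- update G: MSE and adversarial losses combined into ONE backward pass ----
netG:zeroGradParameters()
local output = netG:forward(blur)

mse:forward(output, sharp)
local gradMSE = mse:backward(output, sharp):clone()   -- dL_mse / d(output)

local pred = netD:forward(output)
bce:forward(pred, real_label)                         -- G tries to look "real" to D
-- updateGradInput gives dL_adv / d(output) without touching D's parameter gradients
local gradAdv = netD:updateGradInput(output, bce:backward(pred, real_label))

netG:backward(blur, gradMSE:add(lambda, gradAdv))     -- single backward through G
netG:updateParameters(lr)

-- ---- update D, reusing the SAME G output computed before G's update ----
netD:zeroGradParameters()
local pred_real = netD:forward(sharp)
bce:forward(pred_real, real_label)
netD:backward(sharp, bce:backward(pred_real, real_label))

local pred_fake = netD:forward(output)                -- no second G forward pass
bce:forward(pred_fake, fake_label)
netD:backward(output, bce:backward(pred_fake, fake_label))
netD:updateParameters(lr)
```

Note that the D update at the end reuses output, the G output computed before G's parameters were updated; this mirrors the behavior at line 198 of train.lua that the question asks about, and it saves one forward pass per iteration.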