Alternating training vs. DCGAN?
meder411 opened this issue · 0 comments
meder411 commented
I'm a bit confused about how the generator and discriminator are trained. Perhaps it's just semantics, but the DCGAN "starter" code that many published GANs build on (and that is promoted here) performs one discriminator update and one generator update on every minibatch. However, you warn against using loss statistics to balance generator and discriminator training, which suggests a different scheme: freeze one network for a while and train only the other, then switch.
So which is it: are both networks updated on every minibatch, or do you alternate, training one network at a time for N iterations?