siemanko/tf-adversarial

reparameterization trick?

Opened this issue · 1 comment

This is really cool! It looks like you were able to get good results by sampling
gen_z: np.random.uniform(-1., 1., size=(GENERATOR_BATCH, GENERATOR_SEED)).astype(np.float32)

on each train step without using the reparameterization trick, which was surprising to me (see links below). I would think that this training scheme leads to a highly discontinuous generator function, but that doesn't seem to be the case. Do you know how this worked?
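For concreteness, here is a minimal, self-contained sketch of the per-step sampling I mean. The constants and the toy loop are my own stand-ins, not the repo's actual training code:

```python
# Sketch of "fresh uniform noise on every train step" (assumed stand-in values,
# not necessarily the constants used in tf-adversarial).
import numpy as np

GENERATOR_BATCH = 64   # number of latent vectors per generator update (assumed)
GENERATOR_SEED = 100   # dimensionality of each latent vector z (assumed)

def sample_gen_z():
    """Fresh i.i.d. Uniform(-1, 1) latent batch, as in the snippet above."""
    return np.random.uniform(
        -1., 1., size=(GENERATOR_BATCH, GENERATOR_SEED)
    ).astype(np.float32)

for step in range(3):          # stand-in for the real training loop
    gen_z = sample_gen_z()     # new noise every step, never reused
    # ... the generator/discriminator update would consume gen_z here ...
    print(step, gen_z.shape, gen_z[0, :3])
```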

OK, I am not sure if I understood the RT correctly, but if I did, it asks you to multiply the gradient by p(z) when optimizing G(z). However, since z comes from a uniform distribution, the trick is equivalent to multiplying the gradient by a constant, which can be corrected for by choosing the learning rate appropriately.
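Spelled out, under that reading of the trick (a sketch only; here d is the latent dimensionality, i.e. GENERATOR_SEED above, eta the learning rate, L the generator loss, theta the generator parameters):

```latex
% Constant-density argument, under my reading of the trick (not the canonical
% statement of it): for uniform z, weighting the gradient by p(z) only rescales
% the effective learning rate.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
If $z \sim \operatorname{Uniform}(-1,1)^d$, its density is constant:
\[
  p(z) = \left(\frac{1}{2}\right)^d \quad \text{for every } z \in [-1,1]^d .
\]
So weighting the generator gradient by $p(z)$,
\[
  \theta \leftarrow \theta - \eta\, p(z)\, \nabla_\theta L\!\left(G_\theta(z)\right)
  = \theta - \frac{\eta}{2^d}\, \nabla_\theta L\!\left(G_\theta(z)\right),
\]
is an ordinary gradient step with the learning rate rescaled from $\eta$ to $\eta/2^d$.
\end{document}
```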

Let me know if I have misunderstood.