asyml/texar

NLL on test/train for SeqGAN

OlgaGolovneva opened this issue · 6 comments

Is it possible to add likelihood-based metrics on generated data for SeqGAN evaluation? They are described in the original paper and in the paper accompanying the implementation you refer to (https://arxiv.org/pdf/1802.01886.pdf).

Evaluating likelihood is straightforward with, e.g., texar.losses.sequence_sparse_softmax_cross_entropy. Here is an example of using the loss function:
https://github.com/asyml/texar/blob/master/examples/language_model_ptb/lm_ptb.py#L103-L106
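For intuition, what this loss computes (the average negative log-likelihood per token) can be sketched in plain numpy; the function name and shapes below are illustrative, not Texar's API:

```python
import numpy as np

def sequence_nll(logits, labels, lengths):
    """Average negative log-likelihood per (non-padding) token.

    logits:  [batch, time, vocab] unnormalized scores
    labels:  [batch, time] gold token ids
    lengths: [batch] valid lengths; positions beyond are masked out
    """
    # Numerically stable log-softmax over the vocabulary axis
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # Pick the log-probability of each gold token
    b, t, _ = logits.shape
    gold = log_probs[np.arange(b)[:, None], np.arange(t)[None, :], labels]
    # Mask out padding positions and average
    mask = np.arange(t)[None, :] < lengths[:, None]
    return -(gold * mask).sum() / mask.sum()
```

A lower value means the model assigns higher likelihood to the evaluated sequences; exponentiating it gives per-token perplexity.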

Thanks a lot! Could you please also help me figure out how I can change the k, g, and d parameters (the numbers of epochs and updates for discriminator training) mentioned in the original SeqGAN paper https://arxiv.org/pdf/1609.05473.pdf ?

Discriminator training is done in the function _d_run_epoch. You may customize it for more control.

The while-loop:

while True:
    try:
        sess.run(train_op)  # one mini-batch update
    except tf.errors.OutOfRangeError:
        break

is one-epoch training.
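To wire in the g, d, and k parameters from the paper, one option is a framework-free outer loop around such epoch functions; the callables and parameter names below are hypothetical stand-ins for the Texar training functions, not existing API:

```python
def adversarial_schedule(run_g_update, run_d_epoch, n_iters, g, d, k):
    """Sketch of a SeqGAN-style adversarial training schedule.

    g: generator policy-gradient updates per adversarial iteration
    d: rounds of discriminator training per iteration
    k: discriminator epochs per round (each epoch is a while-loop
       like the one above, run until the dataset is exhausted)
    """
    for _ in range(n_iters):
        for _ in range(g):
            run_g_update()       # one generator update
        for _ in range(d):
            for _ in range(k):
                run_d_epoch()    # one full discriminator epoch
```

Adjusting g, d, and k then only requires changing the arguments to this loop rather than editing the epoch functions themselves.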

Thank you! How can I control the number of mini-batch gradient steps the discriminator runs with the same generator input? In the while-loop, it first draws negative examples from the generator, and then updates the discriminator once with a combination of positive and negative samples.

You may make infer_sample_ids here a TF placeholder, and feed the same generator sample when optimizing the discriminator for multiple steps.
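Independent of the TF plumbing, the idea is: draw the generator sample once, then run several discriminator updates against that fixed sample (with the placeholder approach, by feeding the cached ids each step). A minimal sketch with hypothetical callables:

```python
def train_d_on_fixed_sample(sample_generator, d_update, k):
    """Run k discriminator updates against one fixed generator sample.

    sample_generator: callable returning a batch of negative examples
                      (e.g. a sess.run on the generator's sample ids)
    d_update:         callable taking that batch and doing one
                      discriminator gradient step (e.g. a sess.run on
                      the D train op, feeding the batch via placeholder)
    k:                number of discriminator steps per sample
    """
    fake_batch = sample_generator()   # draw negatives once
    for _ in range(k):
        d_update(fake_batch)          # reuse the same negatives each step
    return fake_batch
```

This keeps the generator fixed during the k discriminator steps, which is the behavior the k parameter in the SeqGAN paper describes.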

Thank you!