TobiasLee/Chinese-Hip-pop-Generation

Why does your g_loss not have a log function?

Masterchenyong opened this issue · 1 comment

```python
self.g_loss = tf.reduce_sum(
    tf.reduce_sum(
        tf.one_hot(tf.to_int32(tf.reshape(self.x, [-1])), self.num_emb, 1.0, 0.0) *
        tf.clip_by_value(  # <---- no tf.log around the clipped probabilities
            tf.reshape(self.g_predictions, [-1, self.num_emb]), 1e-20, 1.0),
        1) * tf.reshape(self.rewards, [-1])  # * tf.reshape(self.target_weights, [-1])
)
```

Shouldn't it be:

```python
self.g_loss = tf.reduce_sum(
    tf.reduce_sum(
        tf.one_hot(tf.to_int32(tf.reshape(self.x, [-1])), self.num_emb, 1.0, 0.0) *
        tf.log(tf.clip_by_value(  # <---- tf.log added, as in the usual policy-gradient loss
            tf.reshape(self.g_predictions, [-1, self.num_emb]), 1e-20, 1.0)),
        1) * tf.reshape(self.rewards, [-1])  # * tf.reshape(self.target_weights, [-1])
)
```

We use a modified adversarial loss, a penalty-based objective function, to encourage diversity and alleviate mode collapse. Please refer to the SentiGAN paper for more details.
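
For intuition, here is a minimal NumPy sketch contrasting the standard log-based policy-gradient loss with the penalty-based form used in the code above. The arrays `p` and `rewards` are hypothetical placeholders, and treating the rewards as penalties follows SentiGAN's formulation rather than anything taken from this repo.

```python
import numpy as np

# Hypothetical values, for illustration only: probabilities the generator
# assigned to the sampled tokens, and the per-token signal coming back
# from the discriminator rollout.
p = np.array([0.6, 0.1, 0.3])
rewards = np.array([0.8, 0.2, 0.5])

clipped = np.clip(p, 1e-20, 1.0)  # same clipping as tf.clip_by_value above

# Standard REINFORCE-style loss (what the question expects): -sum(log p * reward).
# Its gradient scales like reward / p, which blows up for rare tokens.
log_loss = -np.sum(np.log(clipped) * rewards)

# Penalty-based loss (what the code computes): sum(p * penalty).
# Treating the signal as a penalty to minimize keeps the gradient bounded
# even as p -> 0, which SentiGAN argues helps against mode collapse.
penalty_loss = np.sum(clipped * rewards)

print(log_loss, penalty_loss)
```

The gradient of the penalty form with respect to each probability is just the penalty itself, so it stays bounded where -log p would diverge; that boundedness is the rough intuition behind SentiGAN's argument.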