jaehong31/CGES

Doing softmax twice

ggeor84 opened this issue · 2 comments

Hi guys,
thank you for your code, great work! You have a minor bug:
In mnist_model.py, mnist_conv returns tf.nn.softmax(output).
Later, in main.py you do:
ff_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_))

So the softmax is applied twice, since softmax_cross_entropy_with_logits already applies a softmax to its logits argument. Instead, mnist_conv should just return output rather than tf.nn.softmax(output).
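For reference, a minimal sketch of the suggested fix in TF1 style. Only the names mnist_conv, y_conv, y_, and ff_loss come from the issue; the placeholder shapes and the dense layer are illustrative stand-ins, not the repo's actual model code:

```python
import tensorflow as tf

def mnist_conv(x):
    # ... conv/pooling layers would go here in the real model ...
    output = tf.layers.dense(x, 10)  # final linear layer producing logits
    return output                    # return raw logits, NOT tf.nn.softmax(output)

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])

y_conv = mnist_conv(x)

# softmax_cross_entropy_with_logits applies the softmax internally,
# so it must receive unnormalized logits.
ff_loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_))

# If class probabilities are needed elsewhere (e.g. for predictions),
# apply softmax separately on the logits:
probs = tf.nn.softmax(y_conv)
```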

Thank you for your comment. I've fixed it now.

Great, good work!