hwalsuklee/tensorflow-mnist-cnn

Possible bug with is_training parameter

lucgiffon opened this issue · 0 comments

Hello,

While going through the code of your project, I noticed that the is_training parameter does not seem to be taken into account for the CNN model in mnist_cnn_train.py.

I've seen that the cnn_model.CNN function takes an "is_training" argument that defaults to True, which is why the code does not crash.

In mnist_cnn_train.py, you define the is_training placeholder but never pass it when calling cnn_model.CNN. You do feed it in the training and testing loops of the same file, so I assume this is not intended behavior (see the sketch below).
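
To make the point concrete, here is a minimal sketch of what I believe is happening and what the fix would look like. The placeholder name and the exact CNN signature are assumptions based on my reading of the repository, so the details may differ:

```python
# Sketch of the suspected issue in mnist_cnn_train.py (assumed names/signatures).
import tensorflow as tf
import cnn_model

x = tf.placeholder(tf.float32, [None, 784])
is_training = tf.placeholder(tf.bool, name='MODE')

# Suspected current behavior: the placeholder is never passed,
# so the default is_training=True is baked into the graph.
y = cnn_model.CNN(x)

# Suggested fix: wire the placeholder into the model so that
# feed_dict={is_training: False} actually switches off training mode.
y = cnn_model.CNN(x, is_training=is_training)
```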

I haven't tested it yet, but I think the is_training entry of the feed_dict is simply ignored, which causes dropout to be applied during the testing loop (the same goes for batch normalization, which then keeps operating in training mode at test time). This bug could be the cause of issue #1.
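
For reference, this is why passing the placeholder matters. The following is only an illustration with standard tf.layers calls, not the repository's actual layer code: when `training` is the Python constant True, the layers are always in training mode regardless of what is fed for the placeholder.

```python
import tensorflow as tf

def block(x, is_training):
    # If is_training is the constant True instead of a tf.bool tensor,
    # dropout stays active and batch norm keeps using batch statistics
    # even during the testing loop.
    h = tf.layers.dense(x, 128, activation=tf.nn.relu)
    h = tf.layers.batch_normalization(h, training=is_training)
    return tf.layers.dropout(h, rate=0.5, training=is_training)
```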