ftramer/ensemble-adv-training

How to solve the problem of "No gradients provided for any variable"


I tried to run the code, but I always get the same error.

Does anyone know how to solve this problem? The full output is:

tensorflow/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
('X_train shape:', (60000, 28, 28, 1))
(60000, 'train samples')
(10000, 'test samples')
Loaded MNIST test data.
Traceback (most recent call last):
  File "/anaconda2/envs/tensorflow/lib/python2.7/runpy.py", line 174, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/anaconda2/envs/tensorflow/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/DeepLearning/ensemble-adv-training/train.py", line 57, in <module>
    main(args.model, args.type)
  File "/DeepLearning/ensemble-adv-training/train.py", line 39, in main
    tf_train(x, y, model, X_train, Y_train, data_gen)
  File "tf_utils.py", line 79, in tf_train
    optimizer = tf.train.AdamOptimizer().minimize(loss)
  File "/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 350, in minimize
    ([str(v) for _, v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables ["<tf.Variable 'conv2d_1/kernel:0' shape=(5, 5, 1, 64) dtype=float32_ref>", "<tf.Variable 'conv2d_1/bias:0' shape=(64,) dtype=float32_ref>", "<tf.Variable 'conv2d_2/kernel:0' shape=(5, 5, 64, 64) dtype=float32_ref>", "<tf.Variable 'conv2d_2/bias:0' shape=(64,) dtype=float32_ref>", "<tf.Variable 'dense_1/kernel:0' shape=(25600, 128) dtype=float32_ref>", "<tf.Variable 'dense_1/bias:0' shape=(128,) dtype=float32_ref>", "<tf.Variable 'dense_2/kernel:0' shape=(128, 10) dtype=float32_ref>", "<tf.Variable 'dense_2/bias:0' shape=(10,) dtype=float32_ref>"] and loss Tensor("Mean:0", shape=(), dtype=float32).

This appears to be an issue with the K.categorical_crossentropy call in gen_adv_loss() in attack_utils.py. For some reason, there no longer seems to be a gradient defined for this function. Not sure why... Replacing it with an explicit call to TensorFlow's softmax_cross_entropy_with_logits function seems to do the trick, as sketched below.
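For reference, a minimal sketch of that replacement (assuming, as the traceback suggests, that gen_adv_loss receives the model's pre-softmax logits and one-hot labels y; the mean flag is illustrative, not necessarily the exact signature in attack_utils.py):

import tensorflow as tf

def gen_adv_loss(logits, y, mean=False):
    # tf.nn.softmax_cross_entropy_with_logits has a registered gradient,
    # unlike the K.categorical_crossentropy call it replaces here.
    out = tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits)
    if mean:
        # Optionally average over the batch, matching the scalar loss
        # ("Mean:0") that the optimizer minimizes in tf_train.
        out = tf.reduce_mean(out)
    return out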

Thank you. It works.