rstudio-conf-2020/dl-keras-tf

Error: `tf.gradients is not supported when eager execution is enabled`

OmaymaS opened this issue · 3 comments

Issue

Using k_gradients(), which calls keras$backend$gradients(), results in the following error:

Error in py_call_impl(callable, dots$args, dots$keywords) : RuntimeError: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead.

grads <- k_gradients(african_elephant_output, last_conv_layer$output)[[1]]

Not sure if this is related to something in the version/environment.
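
One way to confirm that eager mode is what's active (a minimal check with the tensorflow R package):

library(tensorflow)
# Returns TRUE under TF 2.x defaults -- the mode in which
# k_gradients()/tf.gradients is unsupported
tf$executing_eagerly()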

Related Issues

tensorflow/tensorflow#34235

Session Info

> sessionInfo()
R version 3.5.0 (2018-04-23)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS  10.14.6

Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] keras_2.2.5.0

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.2       here_0.1         lattice_0.20-35  rprojroot_1.3-2 
 [5] zeallot_0.1.0    grid_3.5.0       R6_2.4.0         backports_1.1.4 
 [9] jsonlite_1.6     magrittr_1.5     tfruns_1.4       whisker_0.3-2   
[13] Matrix_1.2-14    reticulate_1.13  generics_0.0.2   tools_3.5.0     
[17] xfun_0.8         compiler_3.5.0   base64enc_0.1-3  tensorflow_2.0.0
[21] knitr_1.24

Good catch @OmaymaS. That is a notebook I haven't really done anything with yet. I grabbed it from JJ Allaire's book repo but planned to revise it and base it on the Cats vs. Dogs model. When I do the revision I will try to work out this issue.

I was catching up on the CNN visualization notebooks today. There are two ways to resolve this:

  1. Using the TF 2.0 GradientTape, as shown here.
  2. Disabling eager execution at the top of the script with tf$compat$v1$disable_eager_execution().

Neither of these has side effects on the rest of the notebook's code chunks, so I'll probably just use tf$compat$v1$disable_eager_execution(). A rough sketch of both options follows.
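
Untested sketch, with `model` and `img` standing in for the pretrained VGG16 and the preprocessed elephant image from the notebook, and 387 the 1-based index of the "African elephant" class:

library(keras)
library(tensorflow)

# Option 2: disable eager execution once, at the very top of the script,
# before the model is built -- k_gradients() then works as in TF 1.x.
tf$compat$v1$disable_eager_execution()

# Option 1: stay eager and use a GradientTape instead of k_gradients().
# Build a model that maps the input image to both the last conv layer's
# activations and the final predictions.
grad_model <- keras_model(
  inputs = model$input,
  outputs = list(model$get_layer("block5_conv3")$output, model$output)
)

with(tf$GradientTape() %as% tape, {
  outputs <- grad_model(img)
  last_conv_output <- outputs[[1]]
  preds <- outputs[[2]]
  class_channel <- preds[, 387]  # "African elephant" score
})

# Gradient of the class score w.r.t. the conv layer's output,
# i.e. what k_gradients() used to return.
grads <- tape$gradient(class_channel, last_conv_output)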

Doing this also results in an error (I'm also following the cats vs. dogs example on "Visualizing convnet filters"):
Error in py_call_impl():

! TypeError: You are passing KerasTensor(type_spec=TensorSpec(shape=(), dtype=tf.float32, name=None), name='tf.math.reduce_mean/Mean:0', description="created by layer 'tf.math.reduce_mean'"), an intermediate Keras symbolic input/output, to a TF API that does not allow registering custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or `tf.map_fn`. Keras Functional model construction only supports TF API calls that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other APIs cannot be called directly on symbolic Keras inputs/outputs. You can work around this limitation by putting the operation in a custom Keras layer `call` and calling that layer on this symbolic input/output.
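
Besides the custom-layer workaround the error suggests, the filter-visualization chunk can avoid building symbolic loss/gradient tensors altogether by computing the loss eagerly inside a GradientTape. A rough, untested sketch (the layer name, image size, step count, and filter index are placeholder values from the VGG16 example):

library(keras)
library(tensorflow)

model <- application_vgg16(weights = "imagenet", include_top = FALSE)

# A sub-model that stops at the layer whose filters we want to visualize.
feature_extractor <- keras_model(
  inputs = model$input,
  outputs = model$get_layer("block3_conv1")$output
)

# Start from a random image and run gradient ascent on it so that it
# maximally activates the first filter of that layer.
img <- tf$random$uniform(shape = c(1L, 150L, 150L, 3L))

for (i in 1:40) {
  with(tf$GradientTape() %as% tape, {
    tape$watch(img)  # img is a plain tensor, not a Variable
    activation <- feature_extractor(img)
    loss <- tf$reduce_mean(activation[, , , 1])
  })
  grads <- tape$gradient(loss, img)
  # L2-normalize the gradient, then take one ascent step.
  grads <- grads / (tf$sqrt(tf$reduce_mean(tf$square(grads))) + 1e-5)
  img <- img + grads
}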