tensorflow/model-optimization

RuntimeError: Layer tf.__operators__.getitem:<class 'tf_keras.src.layers.core.tf_op_layer.SlicingOpLambda'> is not supported.

reganh98 opened this issue

System information

  • TensorFlow version (you are using): 2.15.1
  • tf-keras version (you are using): 2.15.1
  • tensorflow_model_optimization version (you are using): 0.8.0
  • Are you willing to contribute it (Yes/No): No

Motivation

  1. I am implementing a multi-label image segmentation model, which requires splitting the output tensor into individual segmentation outputs. This is done by slicing.
  2. The model will be deployed on mobile devices that require high-speed segmentation, which necessitates quantization aware training: https://www.tensorflow.org/model_optimization/guide/quantization/training_example
  3. Although slicing is a very simple and basic operation, it is not supported in quantization aware training and throws an error when following the quantization aware training guide.

Steps to reproduce:

  1. In a Jupyter notebook: %env TF_USE_LEGACY_KERAS=1
  2. Build a Keras model:
...
outputs = [
    preds,
    keras.backend.expand_dims(keras.backend.argmax(preds[:, :, :, 0:2])),
    keras.backend.expand_dims(keras.backend.argmax(preds[:, :, :, 2:4])),
    keras.backend.expand_dims(keras.backend.argmax(preds[:, :, :, 4:6])),
]
model = keras.Model(inputs=inputs, outputs=outputs)
  3. Quantize the model (a self-contained version of these steps is sketched below):
import tensorflow_model_optimization as tfmot
quantize_model = tfmot.quantization.keras.quantize_model
q_aware_model = quantize_model(model)
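
For completeness, here is a minimal self-contained version of the steps above. The one-layer backbone is a hypothetical stand-in for the elided model body, but the sliced multi-head outputs are the same, so it should fail the same way:

import os
os.environ['TF_USE_LEGACY_KERAS'] = '1'  # must be set before TensorFlow is imported

import tf_keras as keras
import tensorflow_model_optimization as tfmot

# Hypothetical one-layer backbone standing in for the elided model body.
inputs = keras.Input(shape=(64, 64, 3))
preds = keras.layers.Conv2D(6, 3, padding='same', activation='softmax')(inputs)

# Each preds[:, :, :, a:b] slice becomes a SlicingOpLambda layer in the
# functional graph; argmax/expand_dims become TFOpLambda layers.
outputs = [
    preds,
    keras.backend.expand_dims(keras.backend.argmax(preds[:, :, :, 0:2])),
    keras.backend.expand_dims(keras.backend.argmax(preds[:, :, :, 2:4])),
    keras.backend.expand_dims(keras.backend.argmax(preds[:, :, :, 4:6])),
]
model = keras.Model(inputs=inputs, outputs=outputs)

# Raises the RuntimeError shown in the log below.
q_aware_model = tfmot.quantization.keras.quantize_model(model)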

Full error log:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[4], line 1
----> 1 q_aware_model = quantize_model(model)

File /usr/local/lib/python3.11/dist-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py:141, in quantize_model(to_quantize, quantized_layer_name_prefix)
    135   raise ValueError(
    136       '`to_quantize` can only either be a keras Sequential or '
    137       'Functional model.'
    138   )
    140 annotated_model = quantize_annotate_model(to_quantize)
--> 141 return quantize_apply(
    142     annotated_model, quantized_layer_name_prefix=quantized_layer_name_prefix)

File /usr/local/lib/python3.11/dist-packages/tensorflow_model_optimization/python/core/keras/metrics.py:74, in MonitorBoolGauge.__call__.<locals>.inner(*args, **kwargs)
     72 except Exception as error:
     73   self.bool_gauge.get_cell(MonitorBoolGauge._FAILURE_LABEL).set(True)
---> 74   raise error

File /usr/local/lib/python3.11/dist-packages/tensorflow_model_optimization/python/core/keras/metrics.py:69, in MonitorBoolGauge.__call__.<locals>.inner(*args, **kwargs)
     66 @functools.wraps(func)
     67 def inner(*args, **kwargs):
     68   try:
---> 69     results = func(*args, **kwargs)
     70     self.bool_gauge.get_cell(MonitorBoolGauge._SUCCESS_LABEL).set(True)
     71     return results

File /usr/local/lib/python3.11/dist-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py:500, in quantize_apply(model, scheme, quantized_layer_name_prefix)
    494 quantize_registry = scheme.get_quantize_registry()
    496 # 4. Actually quantize all the relevant layers in the model. This is done by
    497 # wrapping the layers with QuantizeWrapper, and passing the associated
    498 # `QuantizeConfig`.
--> 500 return keras.models.clone_model(
    501     transformed_model, input_tensors=None, clone_function=_quantize)

File /usr/local/lib/python3.11/dist-packages/tf_keras/src/models/cloning.py:540, in clone_model(model, input_tensors, clone_function)
    530 if isinstance(model, functional.Functional):
    531     # If the get_config() method is the same as a regular Functional
    532     # model, we're safe to use _clone_functional_model (which relies
   (...)
    535     # or input_tensors are passed, we attempt it anyway
    536     # in order to preserve backwards compatibility.
    537     if generic_utils.is_default(model.get_config) or (
    538         clone_function or input_tensors
    539     ):
--> 540         return _clone_functional_model(
    541             model, input_tensors=input_tensors, layer_fn=clone_function
    542         )
    544 # Case of a custom model class
    545 if clone_function or input_tensors:

File /usr/local/lib/python3.11/dist-packages/tf_keras/src/models/cloning.py:218, in _clone_functional_model(model, input_tensors, layer_fn)
    214 if getattr(model, "use_legacy_config", False):
    215     with keras_option_scope(
    216         save_traces=False, in_tf_saved_model_scope=True
    217     ):
--> 218         model_configs, created_layers = _clone_layers_and_model_config(
    219             model, new_input_layers, layer_fn
    220         )
    221 else:
    222     model_configs, created_layers = _clone_layers_and_model_config(
    223         model, new_input_layers, layer_fn
    224     )

File /usr/local/lib/python3.11/dist-packages/tf_keras/src/models/cloning.py:298, in _clone_layers_and_model_config(model, input_layers, layer_fn)
    295         created_layers[layer.name] = layer_fn(layer)
    296     return {}
--> 298 config = functional.get_network_config(
    299     model, serialize_layer_fn=_copy_layer
    300 )
    301 return config, created_layers

File /usr/local/lib/python3.11/dist-packages/tf_keras/src/engine/functional.py:1592, in get_network_config(network, serialize_layer_fn, config)
   1590 if isinstance(layer, Functional) and set_layers_legacy:
   1591     layer.use_legacy_config = True
-> 1592 layer_config = serialize_layer_fn(layer)
   1593 layer_config["name"] = layer.name
   1594 layer_config["inbound_nodes"] = filtered_inbound_nodes

File /usr/local/lib/python3.11/dist-packages/tf_keras/src/models/cloning.py:295, in _clone_layers_and_model_config.<locals>._copy_layer(layer)
    293     created_layers[layer.name] = InputLayer(**layer.get_config())
    294 else:
--> 295     created_layers[layer.name] = layer_fn(layer)
    296 return {}

File /usr/local/lib/python3.11/dist-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py:446, in quantize_apply.<locals>._quantize(layer)
    440 if not quantize_config:
    441   error_msg = (
    442       'Layer {}:{} is not supported. You can quantize this '
    443       'layer by passing a `tfmot.quantization.keras.QuantizeConfig` '
    444       'instance to the `quantize_annotate_layer` '
    445       'API.')
--> 446   raise RuntimeError(
    447       error_msg.format(layer.name, layer.__class__,
    448                        quantize_registry.__class__))
    450 # `QuantizeWrapper` does not copy any additional layer params from
    451 # `QuantizeAnnotate`. This should generally be fine, but occasionally
    452 # `QuantizeAnnotate` wrapper may contain `batch_input_shape` like params.
    453 # TODO(pulkitb): Ensure this does not affect model cloning.
    454 return quantize_wrapper.QuantizeWrapperV2(
    455     layer, quantize_config, name_prefix=quantized_layer_name_prefix)

RuntimeError: Layer tf.__operators__.getitem:<class 'tf_keras.src.layers.core.tf_op_layer.SlicingOpLambda'> is not supported. You can quantize this layer by passing a `tfmot.quantization.keras.QuantizeConfig` instance to the `quantize_annotate_layer` API.
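
As the error message suggests, a possible workaround is to attach an explicit `tfmot.quantization.keras.QuantizeConfig` to the unsupported op layers via `quantize_annotate_layer`. Below is a minimal sketch with a hypothetical no-op config that leaves the slicing and argmax layers unquantized while the rest of the model is quantized as usual; I have not verified it end-to-end against this model:

import tf_keras as keras
import tensorflow_model_optimization as tfmot

# Hypothetical no-op config: registers no weight, activation, or output
# quantizers, so the wrapped layer passes through unquantized.
class NoOpQuantizeConfig(tfmot.quantization.keras.QuantizeConfig):

    def get_weights_and_quantizers(self, layer):
        return []

    def get_activations_and_quantizers(self, layer):
        return []

    def set_quantize_weights(self, layer, quantize_weights):
        pass

    def set_quantize_activations(self, layer, quantize_activations):
        pass

    def get_output_quantizers(self, layer):
        return []

    def get_config(self):
        return {}

annotate = tfmot.quantization.keras.quantize_annotate_layer

def annotate_fn(layer):
    # Slicing (tf.__operators__.getitem) and argmax/expand_dims show up as
    # SlicingOpLambda / TFOpLambda layers; give them the no-op config.
    if layer.__class__.__name__ in ('SlicingOpLambda', 'TFOpLambda'):
        return annotate(layer, quantize_config=NoOpQuantizeConfig())
    return annotate(layer)

annotated_model = keras.models.clone_model(model, clone_function=annotate_fn)

with tfmot.quantization.keras.quantize_scope(
        {'NoOpQuantizeConfig': NoOpQuantizeConfig}):
    q_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)

Alternatively, since the argmax heads carry no trainable parameters, the slicing/argmax post-processing could be moved outside the quantized model entirely: quantize only the backbone that produces preds, and apply the slicing and argmax to its output afterwards.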