google-research/deeplab2

Script for converting the model to TF-Lite framework

prabal27 opened this issue · 4 comments

Hello! I want to convert the generated model to the TF-Lite format. I first exported the model using the export_model.py script and then ran the tflite_convert tool on the resulting SavedModel. Is this the right approach, or do you recommend any special steps when generating a TF-Lite model from the exported model?
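For reference, the conversion step I ran is roughly equivalent to the following Python-API call (the SavedModel directory and output path below are placeholders for my own paths):

```python
import tensorflow as tf

# Load the SavedModel produced by export_model.py (placeholder path).
saved_model_dir = "exported-model/saved_model"

# Convert with the default converter settings.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

# Write the flatbuffer to disk (placeholder path).
with open("exported-model/model.tflite", "wb") as f:
    f.write(tflite_model)
```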

Hi @prabal27,

Thanks for the issue.
We have not verified converting the model to the TF-Lite framework, so we are not sure whether it is fully supported.

Cheers,

Hello @aquariusjay,
When I use the tf.lite.TFLiteConverter.from_saved_model() function to convert the model, it throws the following error:

###################################################################################################

ConverterError Traceback (most recent call last)
Input In [1], in <cell line: 10>()
4 # converter.target_spec.supported_ops = [
5 # tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
6 # tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
7 # ]
8 converter.optimizations = [tf.lite.Optimize.DEFAULT]
---> 10 tflite_model = converter.convert()
11 open("exported-model/TF-Lite/DeepLabv3Plus_model_new.tflite", "wb").write(tflite_model)

File ~/anaconda3/lib/python3.9/site-packages/tensorflow/lite/python/lite.py:929, in _export_metrics.<locals>.wrapper(self, *args, **kwargs)
926 @functools.wraps(convert_func)
927 def wrapper(self, *args, **kwargs):
928 # pylint: disable=protected-access
--> 929 return self._convert_and_export_metrics(convert_func, *args, **kwargs)

File ~/anaconda3/lib/python3.9/site-packages/tensorflow/lite/python/lite.py:908, in TFLiteConverterBase._convert_and_export_metrics(self, convert_func, *args, **kwargs)
906 self._save_conversion_params_metric()
907 start_time = time.process_time()
--> 908 result = convert_func(self, *args, **kwargs)
909 elapsed_time_ms = (time.process_time() - start_time) * 1000
910 if result:

File ~/anaconda3/lib/python3.9/site-packages/tensorflow/lite/python/lite.py:1212, in TFLiteSavedModelConverterV2.convert(self)
1207 else:
1208 self._debug_info = _get_debug_info(
1209 _convert_debug_info_func(self._trackable_obj.graph_debug_info),
1210 graph_def)
-> 1212 return self._convert_from_saved_model(graph_def)

File ~/anaconda3/lib/python3.9/site-packages/tensorflow/lite/python/lite.py:1095, in TFLiteConverterBaseV2._convert_from_saved_model(self, graph_def)
1092 converter_kwargs.update(self._get_base_converter_args())
1093 converter_kwargs.update(quant_mode.converter_flags())
-> 1095 result = _convert_saved_model(**converter_kwargs)
1096 return self._optimize_tflite_model(
1097 result, quant_mode, quant_io=self.experimental_new_quantizer)

File ~/anaconda3/lib/python3.9/site-packages/tensorflow/lite/python/convert_phase.py:212, in convert_phase.<locals>.actual_decorator.<locals>.wrapper(*args, **kwargs)
210 else:
211 report_error_message(str(converter_error))
--> 212 raise converter_error from None # Re-throws the exception.
213 except Exception as error:
214 report_error_message(str(error))

File ~/anaconda3/lib/python3.9/site-packages/tensorflow/lite/python/convert_phase.py:205, in convert_phase.<locals>.actual_decorator.<locals>.wrapper(*args, **kwargs)
202 @functools.wraps(func)
203 def wrapper(*args, **kwargs):
204 try:
--> 205 return func(*args, **kwargs)
206 except ConverterError as converter_error:
207 if converter_error.errors:

File ~/anaconda3/lib/python3.9/site-packages/tensorflow/lite/python/convert.py:809, in convert_saved_model(**kwargs)
807 model_flags = build_model_flags(**kwargs)
808 conversion_flags = build_conversion_flags(**kwargs)
--> 809 data = convert(
810 model_flags.SerializeToString(),
811 conversion_flags.SerializeToString(),
812 input_data_str=None,
813 debug_info_str=None,
814 enable_mlir_converter=True)
815 return data

File ~/anaconda3/lib/python3.9/site-packages/tensorflow/lite/python/convert.py:311, in convert(model_flags_str, conversion_flags_str, input_data_str, debug_info_str, enable_mlir_converter)
309 for error_data in _metrics_wrapper.retrieve_collected_errors():
310 converter_error.append_error(error_data)
--> 311 raise converter_error
313 return _run_deprecated_conversion_binary(model_flags_str,
314 conversion_flags_str, input_data_str,
315 debug_info_str)

ConverterError: :0: error: loc(callsite(callsite(fused["Cast:", "truediv/Cast_1@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.Cast' op is neither a custom op nor a flex op
:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
:0: note: loc(callsite(callsite(fused["Cast:", "truediv/Cast_1@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
:0: error: loc(callsite(callsite(fused["Cast:", "truediv_1/Cast_1@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.Cast' op is neither a custom op nor a flex op
:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
:0: note: loc(callsite(callsite(fused["Cast:", "truediv_1/Cast_1@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
:0: error: loc(callsite(callsite(fused["RealDiv:", "truediv@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.RealDiv' op is neither a custom op nor a flex op
:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
:0: note: loc(callsite(callsite(fused["RealDiv:", "truediv@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
:0: error: loc(callsite(callsite(fused["RealDiv:", "truediv_1@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.RealDiv' op is neither a custom op nor a flex op
:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
:0: note: loc(callsite(callsite(fused["RealDiv:", "truediv_1@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
:0: error: loc(callsite(callsite(fused["Minimum:", "Minimum@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.Minimum' op is neither a custom op nor a flex op
:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
:0: note: loc(callsite(callsite(fused["Minimum:", "Minimum@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
:0: error: loc(callsite(callsite(fused["Cast:", "Cast_1@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.Cast' op is neither a custom op nor a flex op
:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
:0: note: loc(callsite(callsite(fused["Cast:", "Cast_1@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
:0: error: loc(callsite(callsite(fused["ResizeNearestNeighbor:", "resize_align_corners_1/ResizeNearestNeighbor@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): 'tf.ResizeNearestNeighbor' op is neither a custom op nor a flex op
:0: note: loc(fused["StatefulPartitionedCall:", "StatefulPartitionedCall"]): called from
:0: note: loc(callsite(callsite(fused["ResizeNearestNeighbor:", "resize_align_corners_1/ResizeNearestNeighbor@__inference___call___6210"] at fused["StatefulPartitionedCall:", "StatefulPartitionedCall@__inference_signature_wrapper_6889"]) at fused["StatefulPartitionedCall:", "StatefulPartitionedCall"])): Error code: ERROR_NEEDS_FLEX_OPS
:0: error: failed while converting: 'main':
Some ops are not supported by the native TFLite runtime, you can enable TF kernels fallback using TF Select. See instructions: https://www.tensorflow.org/lite/guide/ops_select
TF Select ops: Cast, Minimum, RealDiv, ResizeNearestNeighbor
Details:
tf.Cast(tensor) -> (tensor) : {Truncate = false, device = ""}
tf.Cast(tensor) -> (tensor) : {Truncate = false, device = ""}
tf.Minimum(tensor, tensor) -> (tensor) : {device = ""}
tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""}
tf.ResizeNearestNeighbor(tensor<1x?x?x1xi32>, tensor<2xi32>) -> (tensor<1x?x?x1xi32>) : {align_corners = true, device = "", half_pixel_centers = false}

###################################################################################################

I guess some of these operations exist only in native TensorFlow and not in TensorFlow Lite. I don't want to fall back to TensorFlow (Select/Flex) ops, because my edge application only ships the standard TF-Lite interpreter. I would be really grateful for any help with this issue.
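For completeness, the fallback suggested by the error message (the ops_select guide linked above) is just the commented-out target_spec lines from my snippet; it should make the conversion go through, but the resulting model then needs the Flex delegate at runtime, which my interpreter does not include (path below is a placeholder):

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("exported-model/saved_model")  # placeholder path
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # native TF-Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS,    # TF (Flex) ops such as Cast, Minimum, RealDiv, ResizeNearestNeighbor.
]
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```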

The old DeepLab repo seems to have a script for generating TensorFlow Lite models called convert_to_tflite.py. So, should I work with that repo if I want to generate TF-Lite models, or are you planning to add a similar script here?

Hi @prabal27,

Thanks for asking.
We currently have no plan to add TF-Lite support, given our limited bandwidth.

Cheers,