tensorflow/tensorrt

Cannot convert TF-Text Tokenizer to TensorRT


System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 2.4.0
  • Python version: 3.6
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: 11.0/7.6
  • GPU model and memory: RTX 2080

Describe the current behavior
Whenever I try to convert a model containing a TF-Text tokenizer as a subgraph, conversion fails with an InvalidArgumentError (full traceback below).

Describe the expected behavior
The converter should convert the model without error.

Standalone code to reproduce the issue
https://colab.research.google.com/drive/1_S37VihkTZ1B0HgjW8D7DMZcI8nwz2Bk

Other info / logs

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-7-e780e9ed6737> in <module>
      4 )
      5 
----> 6 converter.convert()

~/anaconda3/envs/ds/lib/python3.6/site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py in convert(self, calibration_input_fn)
   1094                                   self._input_saved_model_tags)
   1095     func = self._saved_model.signatures[self._input_saved_model_signature_key]
-> 1096     frozen_func = convert_to_constants.convert_variables_to_constants_v2(func)
   1097     grappler_meta_graph_def = saver.export_meta_graph(
   1098         graph_def=frozen_func.graph.as_graph_def(), graph=frozen_func.graph)

~/anaconda3/envs/ds/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py in convert_variables_to_constants_v2(func, lower_control_flow, aggressive_inlining)
   1069       func=func,
   1070       lower_control_flow=lower_control_flow,
-> 1071       aggressive_inlining=aggressive_inlining)
   1072 
   1073   output_graph_def, converted_input_indices = _replace_variables_by_constants(

~/anaconda3/envs/ds/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py in __init__(self, func, lower_control_flow, aggressive_inlining, variable_names_allowlist, variable_names_denylist)
    804         variable_names_allowlist=variable_names_allowlist,
    805         variable_names_denylist=variable_names_denylist)
--> 806     self._build_tensor_data()
    807 
    808   def _build_tensor_data(self):

~/anaconda3/envs/ds/lib/python3.6/site-packages/tensorflow/python/framework/convert_to_constants.py in _build_tensor_data(self)
    823         data = map_index_to_variable[idx].numpy()
    824       else:
--> 825         data = val_tensor.numpy()
    826       self._tensor_data[tensor_name] = _TensorData(
    827           numpy=data,

~/anaconda3/envs/ds/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in numpy(self)
   1069     """
   1070     # TODO(slebedev): Consider avoiding a copy for non-CPU or remote tensors.
-> 1071     maybe_arr = self._numpy()  # pylint: disable=protected-access
   1072     return maybe_arr.copy() if isinstance(maybe_arr, np.ndarray) else maybe_arr
   1073 

~/anaconda3/envs/ds/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in _numpy(self)
   1037       return self._numpy_internal()
   1038     except core._NotOkStatusException as e:  # pylint: disable=protected-access
-> 1039       six.raise_from(core._status_to_exception(e.code, e.message), None)  # pylint: disable=protected-access
   1040 
   1041   @property

~/anaconda3/envs/ds/lib/python3.6/site-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.

Possibly related: lookup tables also cannot be converted, see tensorflow/text#486
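The error can be reproduced without TF-TRT at all. TF-Text tokenizers hold their vocabularies in lookup tables, whose handles are tensors of dtype `resource`, and `convert_variables_to_constants_v2` (visible in the traceback) calls `.numpy()` on every captured tensor while freezing the graph. A minimal sketch of the underlying failure, using a plain `tf.lookup.StaticHashTable` as a stand-in for the tokenizer's table:

```python
import tensorflow as tf


def resource_numpy_error():
    """Show that a DT_RESOURCE tensor (e.g. a lookup-table handle) cannot
    be materialized as a NumPy array, which is what freezing attempts."""
    table = tf.lookup.StaticHashTable(
        tf.lookup.KeyValueTensorInitializer(
            keys=tf.constant(["a", "b"]),
            values=tf.constant([1, 2], dtype=tf.int64)),
        default_value=tf.constant(0, dtype=tf.int64))
    handle = table.resource_handle  # dtype is tf.resource
    try:
        handle.numpy()  # same call made in _build_tensor_data during freezing
    except tf.errors.InvalidArgumentError as e:
        return str(e)
    return None


if __name__ == "__main__":
    print(resource_numpy_error())
```

This reproduces the same "Cannot convert a Tensor of dtype resource to a NumPy array" failure, which is why freezing any SavedModel that captures a table handle breaks inside `trt_convert.py`.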

@bixia1 This looks like an issue in core TF. Can you take a look?
Thanks

I have a two-line change that fixes this; see my GitHub branch.

There is a corresponding fix at Google, currently under internal review.

Hey @bixia1, I don't think that change actually fixed the problem; please see NVIDIA/TensorRT#2116