tensorflow/tensorrt

[Bug/Feature Request] - TF-TensorRT to support “string” datatype

vilmara opened this issue · 1 comment

Description

This bug/feature request is for native TensorRT to support “string” datatype for object detection models.

Models trained with Google AutoML include the string datatype, which is not supported by TensorRT. After applying TF-TRT integration to optimize an object detection model trained with Google AutoML, and after also running Graph Surgeon (GS) on the resulting graphs, it appears that even GS cannot fix the issue with the “string” datatype present in the non-optimized model (see the error below).

Error with TF-TRT integration:
tensorflow/core/grappler/grappler_item_builder.cc:670] Init node index_to_string/table_init/LookupTableImportV2 doesn't exist in graph
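The error above comes from Grappler failing to find an init node by name in the GraphDef. A minimal, purely illustrative sketch of that kind of check (this is not the actual TF-TRT/Grappler implementation; the node names are taken from the log and the helper function is hypothetical):

```python
# Hypothetical sketch: Grappler resolves each init node (e.g. the
# LookupTableImportV2 that fills a string lookup table) by name in the
# GraphDef. If the node was pruned or never imported, the lookup fails
# and an error like the one above is reported.

def find_missing_init_nodes(graph_nodes, init_nodes):
    """Return the init nodes that are absent from the graph."""
    present = set(graph_nodes)
    return [n for n in init_nodes if n not in present]

# Illustrative node names only.
graph_nodes = ["image_input", "detection_boxes", "detection_scores"]
init_nodes = ["index_to_string/table_init/LookupTableImportV2"]

for node in find_missing_init_nodes(graph_nodes, init_nodes):
    print(f"Init node {node} doesn't exist in graph")
```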

Environment

TensorRT Version: 8.2.3
NVIDIA GPU: A2
NVIDIA Driver Version: 470.129.06
CUDA Version: 11.6
CUDNN Version:
Operating System: Ubuntu 18.04
Python Version (if applicable): 3.8.10
Tensorflow Version (if applicable): 2.7
PyTorch Version (if applicable): n/a
Baremetal or Container (if so, version): docker image nvcr.io/nvidia/tensorflow:22.02-tf2-py3

Hi @vilmara, thanks for reporting the error. Normally, TF-TRT should exclude string nodes from the conversion and convert the rest of the model. The error message suggests there is a problem while initializing the conversion.
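For intuition, the exclusion the comment describes can be sketched as a partition of the graph's ops by datatype: ops with TensorRT-supported dtypes go into convertible segments, while string ops stay in native TensorFlow. This is a conceptual sketch only, not the TF-TRT segmentation algorithm; the op names and dtype set are assumptions for illustration:

```python
# Conceptual sketch of dtype-based exclusion (not the actual TF-TRT
# segmenter). Dtype support set and op names are illustrative.

TRT_SUPPORTED_DTYPES = {"float32", "float16", "int32"}

def partition_ops(ops):
    """Split (name, dtype) ops into (convertible, excluded) lists."""
    convertible, excluded = [], []
    for name, dtype in ops:
        if dtype in TRT_SUPPORTED_DTYPES:
            convertible.append(name)
        else:
            excluded.append(name)  # e.g. string ops stay in native TF
    return convertible, excluded

ops = [
    ("conv2d", "float32"),
    ("nms", "float32"),
    ("index_to_string_lookup", "string"),  # string op: cannot go to TRT
]
convertible, excluded = partition_ops(ops)
```

If the conversion aborts before this partitioning even runs (as the missing-init-node error suggests), no segments are built at all, which is why sharing reproduction steps helps pinpoint the initialization failure.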

Could you share steps to reproduce the problem?