tensorflow/tensorrt

error: Running multiple TensorRT optimized models in Tensorflow

fengtiaotiao1 opened this issue · 0 comments

The following code converts the model and builds TensorRT engines for all possible input shapes:

```python
import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

def input_function():
    input_shapes = [[1, 60, 80, 3], [1, 40, 60, 3]]
    for shape in input_shapes:
        yield [np.random.normal(size=shape).astype(np.float32)]

conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP32,
    maximum_cached_engines=100
)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=saved_model_path,
    conversion_params=conversion_params)

converter.convert()
converter.build(input_fn=input_function)
converter.save(output_saved_model_dir=trt_fp32_model_path)
```
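For context, the load step that triggers the problem described below can be sketched as follows. The two model paths are hypothetical placeholders for two separately converted TensorRT SavedModels; this requires a GPU with a TensorRT runtime:

```python
import tensorflow as tf

# Hypothetical paths to two SavedModels, each converted and saved
# with TrtGraphConverterV2 as shown above.
trt_model_path_1 = "/models/trt_fp32_model_1"
trt_model_path_2 = "/models/trt_fp32_model_2"

# Loading the first TensorRT-optimized model works.
model_1 = tf.saved_model.load(trt_model_path_1)

# Loading a second TensorRT-optimized model in the same process
# is where the error described below appears.
model_2 = tf.saved_model.load(trt_model_path_2)
```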

The problem is that when I try to load more than one TensorRT-optimized model with pre-built engines, TensorFlow throws the following error:

[screenshot of the error message]