tensorflow/tensorrt

TensorRT SavedModel pb cannot be imported into TensorBoard or served with TF Serving

superhg2012 opened this issue · 0 comments

Hi, I am using TF-TRT with my Tacotron model.

My question: the TF-TRT optimized SavedModel cannot be imported into TensorBoard or deployed through TF Serving.

tensorflow : 1.13.0-rc0
tensorrt : 5.0.2.6

My steps:

1 export the SavedModel using saved_model_builder:

saved_model.pb
variables/
    variables.data-00000-of-00001
    variables.index
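As a sanity check, the layout above can be verified with a small stdlib helper before handing the directory to the converter (the function name here is hypothetical, not part of TensorFlow):

```python
import os

def looks_like_saved_model(export_dir):
    """Heuristic check (hypothetical helper): a TF SavedModel directory
    contains saved_model.pb plus a variables/ subdirectory holding the
    checkpoint data shards and index file."""
    pb = os.path.join(export_dir, "saved_model.pb")
    var_dir = os.path.join(export_dir, "variables")
    if not (os.path.isfile(pb) and os.path.isdir(var_dir)):
        return False
    names = os.listdir(var_dir)
    has_data = any(n.startswith("variables.data-") for n in names)
    has_index = "variables.index" in names
    return has_data and has_index
```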

2 run the TensorRT optimization:

import os
import tensorflow.contrib.tensorrt as trt  # TF 1.13: TF-TRT lives in contrib

workspace_size = 1 << 30
trt.create_inference_graph(
    input_graph_def=None,   # not needed when input_saved_model_dir is set
    outputs=None,
    max_batch_size=32,
    max_workspace_size_bytes=workspace_size,
    input_saved_model_dir=os.path.join(args.export_dir, args.version),
    output_saved_model_dir="FP32_savedmodel_dir",
    precision_mode="FP32")

The result is a SavedModel in the "FP32_savedmodel_dir" directory.
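A quick way to check whether the conversion actually produced any TRT engines is to scan the serialized graph for the `TRTEngineOp` op-type name, which appears as a plain string inside saved_model.pb. This is a crude byte-level heuristic, not an official API:

```python
import os

def count_trt_engine_markers(saved_model_dir):
    """Crude heuristic: count occurrences of the op-type string
    'TRTEngineOp' in the serialized saved_model.pb. Zero suggests
    TF-TRT did not create any engine nodes in the converted graph."""
    pb_path = os.path.join(saved_model_dir, "saved_model.pb")
    with open(pb_path, "rb") as f:
        data = f.read()
    return data.count(b"TRTEngineOp")
```

If this returns 0 for "FP32_savedmodel_dir", TF-TRT likely failed to segment the graph, which could explain an empty TRT function at serving time.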

When deploying the FP32 TRT SavedModel files with the TF Serving Docker image, I get this error:

"The TF function for the TRT segment could not be empty"

I also tried importing the SavedModel pb into TensorBoard using TensorFlow's "import_pb_to_tensorboard.py" tool. The event file was generated successfully, but when I started TensorBoard and loaded the events, it failed and just hung.

Could you help?