tensorflow/tensorrt

Failed to build TensorRT engine

MachineJeff opened this issue · 2 comments

Hi, I used the following code to convert my SavedModel with TensorRT:

from tensorflow.python.compiler.tensorrt import trt_convert as trt
converter = trt.TrtGraphConverter(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)

After the conversion finished, I used the following code to run inference:

with tf.Session() as sess:
    # First load the SavedModel into the session
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], output_saved_model_dir)
    output = sess.run([output_tensor], feed_dict={input_tensor: input_data})

Although I got the right output, I received the following warning messages:

2019-11-01 19:11:24.404448: W tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:647] Engine creation for TRTEngineOp_22 failed. The native segment will be used instead. Reason: Internal: Failed to build TensorRT engine
2019-11-01 19:11:24.406408: I tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:632] Building a new TensorRT engine for TRTEngineOp_21 input shapes: [[1,128], [1,128]]
2019-11-01 19:11:24.406463: I tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:632] Building a new TensorRT engine for TRTEngineOp_23 input shapes: [[1,128], [1,128]]
2019-11-01 19:11:24.408900: W tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for (Unnamed Layer* 2) [Fully Connected]_output. This is okay if TensorRT does not need the range (e.g. due to node fusion).
2019-11-01 19:11:24.408921: W tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for (Unnamed Layer* 5) [Scale]_output. This is okay if TensorRT does not need the range (e.g. due to node fusion).
2019-11-01 19:11:24.408930: W tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for model/inference/post_cbhg/bidirectional_rnn/bw/bw/while/gru_cell/BiasAdd_1. This is okay if TensorRT does not need the range (e.g. due to node fusion).
2019-11-01 19:11:24.408938: W tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for (Unnamed Layer* 1) [Shuffle]_output. This is okay if TensorRT does not need the range (e.g. due to node fusion).
2019-11-01 19:11:24.408946: W tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1467] Quantization range was not found for (Unnamed Layer* 4) [Shuffle]_output. This is okay if TensorRT does not need the range (e.g. due to node fusion).

And the inference time is the same as with the original model. Did TensorRT actually take effect?
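One way to check whether conversion actually produced TensorRT segments is to count the TRTEngineOp nodes in the converted GraphDef (with TF-TRT you would iterate over the nodes of the graph returned by converter.convert()). Below is a minimal runnable sketch of that counting logic using a stand-in node type, since the exact graph depends on your model:

```python
from collections import namedtuple

# Minimal stand-in for a GraphDef node so the logic runs without TensorFlow;
# with TF-TRT you would iterate frozen_graph.node instead.
Node = namedtuple("Node", ["name", "op"])

def count_trt_engine_ops(nodes):
    """Count nodes whose op type is TRTEngineOp."""
    return sum(1 for n in nodes if n.op == "TRTEngineOp")

# Hypothetical graph mirroring the op names seen in the logs above.
graph = [Node("input", "Placeholder"),
         Node("TRTEngineOp_21", "TRTEngineOp"),
         Node("TRTEngineOp_22", "TRTEngineOp"),
         Node("output", "Identity")]
print(count_trt_engine_ops(graph))  # → 2
```

If the count is zero, no segment was converted and identical timings are expected; if it is nonzero but engines fail to build (as in the warning above), TF-TRT falls back to the native segment, which also leaves timings unchanged.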

Besides, I also tried FP32 and FP16 precision modes; the inference time did not change either.
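One caveat when timing TF-TRT models: engines may be built lazily on the first sess.run calls, so measurements without warm-up can be misleading. A small generic benchmarking sketch (the callable and iteration counts are placeholders, not from the original post):

```python
import time

def benchmark(run_once, warmup=5, iters=50):
    """Return the average latency of run_once after warm-up iterations.

    With TF-TRT, the first few calls may trigger engine builds, so warm-up
    runs are needed before the timed loop is representative.
    """
    for _ in range(warmup):
        run_once()
    start = time.perf_counter()
    for _ in range(iters):
        run_once()
    return (time.perf_counter() - start) / iters

# Usage with a session would look like:
# avg = benchmark(lambda: sess.run([output_tensor], feed_dict={input_tensor: input_data}))
```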

Copying over @MachineJeff's details from related bug (NVIDIA/TensorRT#193) as I wasn't too helpful on this one 🙁


Maybe I have found the real source of the error.
I used the following code to convert the SavedModel with INT8:

converter = trt.TrtGraphConverter(
    input_saved_model_dir=input_saved_model_dir,
    is_dynamic_op=True,
    max_batch_size=100,
    precision_mode='INT8',
    nodes_blacklist=['model/inference/dense/BiasAdd'],
    maximum_cached_engines=10000,
    use_calibration=True)
converter.convert()

converter.calibrate(
    fetch_names=['model/inference/dense/BiasAdd'],
    num_runs=1,
    feed_dict_fn=feed_dict_fn)

converter.save(output_saved_model_dir)
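For reference, calibrate expects feed_dict_fn to return a feed dict of representative input data on each call. A minimal sketch, where the tensor name and the [1, 128] shape are hypothetical placeholders (the logs above show inputs of that shape) that must be replaced with the model's real input and data:

```python
def feed_dict_fn():
    # Hypothetical input tensor name and shape [1, 128]; replace both with
    # your model's actual input name and representative calibration samples.
    return {"model/inference/inputs:0": [[0] * 128]}
```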

The conversion completes without problems:

2019-11-04 12:26:40.096372: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:752] Optimization results for grappler item: tf_graph
2019-11-04 12:26:40.096420: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:754]   constant folding: Graph size after: 10453 nodes (-5205), 13656 edges (-6121), time = 1580.97803ms.
2019-11-04 12:26:40.096431: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:754]   layout: Graph size after: 10515 nodes (62), 13776 edges (120), time = 431.526ms.
2019-11-04 12:26:40.096438: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:754]   constant folding: Graph size after: 10515 nodes (0), 13776 edges (0), time = 704.257ms.
2019-11-04 12:26:40.096445: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:754]   TensorRTOptimizer: Graph size after: 9908 nodes (-607), 13071 edges (-705), time = 1929.90796ms.

But the calibration fails with some errors:

2019-11-04 12:26:48.726835: I tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:716] Starting calibration thread on device 0, Calibration Resource @ 0x7fa8fc070bf0
2019-11-04 12:26:48.728281: E tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1561] Node model/inference/encoder_cbhg/bidirectional_rnn/bw/bw/while/gru_cell/concat_1 should have an input named 'model/inference/encoder_cbhg/bidirectional_rnn/bw/bw/while/gru_cell/mul' but it is not available
2019-11-04 12:26:48.728317: E tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:739] Calibration failed: Invalid argument: Node model/inference/encoder_cbhg/bidirectional_rnn/bw/bw/while/gru_cell/concat_1 should have an input named 'model/inference/encoder_cbhg/bidirectional_rnn/bw/bw/while/gru_cell/mul' but it is not available
2019-11-04 12:26:48.728381: I tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:716] Starting calibration thread on device 0, Calibration Resource @ 0x7fa910005030
2019-11-04 12:26:48.729420: E tensorflow/compiler/tf2tensorrt/convert/convert_nodes.cc:1561] Node model/inference/encoder_cbhg/bidirectional_rnn/fw/fw/while/gru_cell/concat_1 should have an input named 'model/inference/encoder_cbhg/bidirectional_rnn/fw/fw/while/gru_cell/mul' but it is not available
2019-11-04 12:26:48.729451: E tensorflow/compiler/tf2tensorrt/kernels/trt_engine_op.cc:739] Calibration failed: Invalid argument: Node model/inference/encoder_cbhg/bidirectional_rnn/fw/fw/while/gru_cell/concat_1 should have an input named 'model/inference/encoder_cbhg/bidirectional_rnn/fw/fw/while/gru_cell/mul' but it is not available

The key message is:

Node ... should have an input named ... but it is not available

What happened? I am sure my model runs without errors on its own.
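The failing nodes in the calibration errors all sit inside the bidirectional RNN while-loops (the gru_cell subgraphs), which TF-TRT often cannot convert as a whole. One workaround worth trying (not verified here) is to extend nodes_blacklist to exclude everything under those scopes. A runnable sketch of that filtering logic, using scope names taken from the logs above:

```python
# Sketch: build a nodes_blacklist entry list that excludes the RNN
# while-loop scopes named in the calibration errors above.
rnn_scopes = ("model/inference/encoder_cbhg/bidirectional_rnn",
              "model/inference/post_cbhg/bidirectional_rnn")

def nodes_to_blacklist(node_names):
    # Keep any node whose name falls under one of the RNN scopes;
    # str.startswith accepts a tuple of prefixes.
    return [n for n in node_names if n.startswith(rnn_scopes)]

names = ["model/inference/encoder_cbhg/bidirectional_rnn/fw/fw/while/gru_cell/mul",
         "model/inference/dense/BiasAdd"]
print(nodes_to_blacklist(names))
```

In a real run, node_names would come from the node names of the loaded GraphDef rather than a hand-written list.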

Hi @MachineJeff
I am running into the same problem.
Have you solved it?