NVIDIA-AI-IOT/tf_trt_models

Core dump when calling create_inference_graph

lannyyip opened this issue · 1 comment

I followed the code example to convert an ssd_mobilenet_v2 model to a TRT model on a Jetson Nano, as below:

from tf_trt_models.detection import build_detection_graph
import tensorflow.contrib.tensorrt as trt
import tensorflow as tf

# Allow GPU memory to grow on demand (important on the Nano's shared memory).
# Note: this config is defined here but not passed to any session in this snippet.
config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))

# Build and freeze the detection graph from the pipeline config and checkpoint.
frozen_graph, input_names, output_names = build_detection_graph(
    config='/mypath/ssd_mobilenet_v2_coco.config',
    checkpoint='/mypath/model.ckpt-33825'
)

# Replace supported subgraphs of the frozen graph with TensorRT engines (FP16, batch size 1).
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16',
    minimum_segment_size=50
)
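
For reference, if create_inference_graph returned successfully, the next step (following the repo README) would be to sanity-check the result and serialize it for reuse; in my case the crash happens before any of this runs. A minimal sketch, with a hypothetical output path:

# Sanity check: count how many subgraphs were actually replaced by TensorRT engines.
num_engines = len([n for n in trt_graph.node if n.op == 'TRTEngineOp'])
print('TRTEngineOp nodes: %d' % num_engines)

# Serialize the converted graph for later reuse on the Nano.
with open('/mypath/ssd_mobilenet_v2_trt.pb', 'wb') as f:  # hypothetical path
    f.write(trt_graph.SerializeToString())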

To be clear, the model was trained on an amd64 platform.
While executing create_inference_graph, the following core dump is generated:

2019-09-27 14:58:20.320137: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger (Unnamed Layer* 3) [Convolution]: at least three non-batch dimensions are required for input
2019-09-27 14:58:20.320454: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:41] DefaultLogger (Unnamed Layer* 9) [Convolution]: at least three non-batch dimensions are required for input
Segmentation fault (core dumped)
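
The "(Unnamed Layer* N) [Convolution]" messages come from TensorRT's logger while the converter builds an engine, and that layer numbering does not map one-to-one onto TF node names. As a rough heuristic for matching them up, the convolution nodes in the frozen GraphDef can be listed:

# Heuristic only: print convolution nodes and their inputs so the TensorRT
# "(Unnamed Layer* N) [Convolution]" errors can be matched up manually.
for node in frozen_graph.node:
    if node.op in ('Conv2D', 'DepthwiseConv2dNative'):
        print(node.name, '<-', ', '.join(node.input))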

I am not sure whether the problem is related to the following warning.

WARNING:tensorflow:TensorRT mismatch. Compiled against version 5.1.6, but loaded 5.0.6. Things may not work
WARNING:tensorflow:TensorRT mismatch. Compiled against version 5.1.6, but loaded 5.0.6. Things may not work
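
To confirm which TensorRT runtime actually gets loaded, the version can be queried directly from libnvinfer via ctypes (getInferLibVersion() is part of the TensorRT C API; the .so.5 suffix and the major*1000 + minor*100 + patch decoding below are assumptions based on TensorRT 5's packaging):

import ctypes

# Query the loaded TensorRT runtime; getInferLibVersion() returns e.g.
# 5006 for 5.0.6 (assumed encoding: major*1000 + minor*100 + patch).
libnvinfer = ctypes.CDLL('libnvinfer.so.5')  # soname assumed for JetPack's TensorRT 5
v = libnvinfer.getInferLibVersion()
print('Loaded TensorRT: %d.%d.%d' % (v // 1000, (v // 100) % 10, v % 100))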

Could anyone give me a hand with this? Thank you.

Have you found a solution? I am also facing the same problem. @lannyyip