tensorflow/tensorrt

INT8 quantization core dumped

zhxjlbs opened this issue · 0 comments

Hi, I used this code to convert my YOLOv3 model to FP16, and the converted model is faster. But when I convert the model to INT8 TensorRT, it throws a core-dumped error. How can I solve this problem? I have no idea; could you give me some help?

System information:
TensorRT: https://developer.download.nvidia.com/compute/machine-learning/repos/rhel7/x86_64/nvidia-machine-learning-repo-rhel7-1.0.0-1.x86_64.rpm
TensorFlow: 1.15
CUDA: 10.0

The error info:
TensorRT precision mode: INT8
Begin conversion.
terminate called after throwing an instance of 'std::out_of_range'
what(): _Map_base::at
convert_to_trt.sh: line 23: 17444 Aborted (core dumped)
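
From what I understand, the 'std::out_of_range' thrown from '_Map_base::at' is an unhandled C++ std::unordered_map::at lookup failure inside the native library, which is why the process aborts instead of raising a Python exception. Could it be a mismatch between the tensor name fed in feed_dict_fn and the placeholder actually present in the frozen graph? A minimal check I can run (assuming the same graph_def that is passed to the converter below):

import tensorflow as tf

def list_placeholders(graph_def):
    # Print every placeholder so the name used in feed_dict_fn
    # ('inputs:0') can be compared against what the graph contains.
    for node in graph_def.node:
        if node.op == 'Placeholder':
            print(node.name, tf.as_dtype(node.attr['dtype'].type))

list_placeholders(graph_def)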

And my code is as follows:
import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

def feed_dict_fn():
    # Feed a random batch of images for calibration; real calibration
    # data should come from the actual dataset for usable INT8 ranges.
    batch_images = np.random.normal(0, 0.1, (calib_batch_size, 208, 208, 3))
    return {'inputs:0': batch_images}

converter = trt.TrtGraphConverter(
    input_graph_def=graph_def,
    precision_mode=trt_precision_mode,
    nodes_blacklist=out_names,
    max_workspace_size_bytes=79869184,
    minimum_segment_size=2,
    maximum_cached_engines=6,
    is_dynamic_op=True,
    use_calibration=True)
trt_graph_def = converter.convert()

# Calibration runs the converted graph num_runs times to collect the
# INT8 dynamic ranges; this is the step that aborts.
trt_graph_def = converter.calibrate(
    fetch_names=out_names,
    num_runs=num_batches,
    feed_dict_fn=feed_dict_fn)
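
Two more things I can try, though these are only guesses: np.random.normal returns float64, so casting the calibration batch to float32 would rule out a dtype mismatch with the input placeholder, and once calibration succeeds the calibrated graph can be serialized for reuse. A sketch, where out_dir is a hypothetical output directory:

import numpy as np
import tensorflow as tf

# np.random.normal returns float64; cast to float32 to match a typical
# image placeholder (an assumption; check the dtype printed above).
batch_images = np.random.normal(
    0, 0.1, (calib_batch_size, 208, 208, 3)).astype(np.float32)

# After calibration succeeds, write the calibrated graph to disk.
tf.io.write_graph(trt_graph_def, out_dir, 'yolov3_int8_trt.pb', as_text=False)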