NVIDIA-AI-IOT/tf_to_trt_image_classification

test_trt error "Mismatch between allocated memory size and expected size of serialized engine"

leonardopsantos opened this issue · 0 comments

I'm trying to run the tf_to_trt_image_classification app, but only the Inception_v1, vgg_16, and mobilenet_v1_0p5_160 networks work. I'm using a Jetson TX2 with JetPack 3.2.

I've followed the tutorial, but the test_trt binary crashes with the error:

```
test_trt: cudnnEngine.cpp:640: bool nvinfer1::cudnn::Engine::deserialize(const void*, std::size_t, nvinfer1::IPluginFactory*): Assertion 'size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.
```

I've tried running it with the TensorFlow 1.5.0 pip wheel from the tutorial and with a TensorFlow 1.7 wheel from this topic.

The second line from src/test/test_trt.cu is failing:

```cpp
IRuntime *runtime = createInferRuntime(gLogger);
ICudaEngine *engine = runtime->deserializeCudaEngine((void*)plan.data(), plan.size(), nullptr);
```
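For anyone debugging this: the assertion usually means the buffer passed to `deserializeCudaEngine` is smaller than the size recorded inside the serialized engine. Two common causes are (a) the plan file being read incompletely or in text mode, and (b) a TensorRT version mismatch, since a plan serialized with one TensorRT release (or on a different GPU) cannot be deserialized by another. A minimal sketch of a robust binary read of the plan file (this is an illustration, not the repository's code; `readPlan` is a hypothetical helper):

```cpp
#include <cassert>
#include <fstream>
#include <string>
#include <vector>

// Read an entire serialized-engine ("plan") file into memory in binary mode.
// A truncated or text-mode read produces a buffer smaller than the engine
// expects, which triggers the "Mismatch between allocated memory size and
// expected size of serialized engine" assertion at deserialization time.
std::vector<char> readPlan(const std::string &path)
{
    // ios::ate opens at the end so tellg() reports the file size in bytes.
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    assert(file.good() && "could not open plan file");
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> plan(static_cast<size_t>(size));
    // read() must consume exactly `size` bytes; a short read means the
    // buffer handed to deserializeCudaEngine would be incomplete.
    assert(file.read(plan.data(), size) && "short read of plan file");
    return plan;
}
```

If the plan reads back at its full on-disk size and the error persists, the likely culprit is that the plan was built with a different TensorRT version than the one on the TX2 (JetPack 3.2 ships a specific TensorRT release), so regenerating the plan on the same device/JetPack is worth trying.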

Any pointers?

Thanks a lot!!