tensorflow/tensorrt

TRT model fails to load on Jetson

gan3sh500 opened this issue · 1 comments

I was getting the error below when loading a model on a Jetson with JetPack 4.4; the model had been converted to TRT using tf 2.2 from pip.
Even now, with TF master built from source and the model converted using CUDA 10.2, cuDNN 8.0.0, and TRT 7.1.3, I get the same error.
JetPack 4.4 ships the same versions except for TRT 7.1.0. I used 7.1.3 because it was what I could find on apt. I built using an image made by modifying these versions into the TF devel-gpu Dockerfile from here.

```
2020-08-01 04:41:53.419342: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:42] DefaultLogger coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
2020-08-01 04:41:53.434021: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:42] DefaultLogger INVALID_STATE: std::exception
2020-08-01 04:41:53.434088: E tensorflow/compiler/tf2tensorrt/utils/trt_logger.cc:42] DefaultLogger INVALID_CONFIG: Deserialize the cuda engine failed.
Fatal Python error: Segmentation fault
```

What version tag is it referring to and how can I fix this?
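For context: the "version tag" in `verifyHeader` refers to the TensorRT version recorded in the serialized engine's header. A serialized engine only deserializes under the exact TensorRT version that produced it, so an engine built against TRT 7.1.3 will be rejected by JetPack 4.4's TRT 7.1.0 runtime. A minimal illustrative sketch of that check (not the actual TensorRT source, which is C++):

```python
# Illustrative sketch: a serialized engine header stores the exact
# TensorRT version it was built with, and deserialization rejects
# any mismatch -- producing "Version tag does not match" as above.

def verify_header(engine_version: str, runtime_version: str) -> bool:
    """Return True only if the engine was serialized by the exact
    same TensorRT version as the runtime trying to load it."""
    return engine_version == runtime_version

# Built with TRT 7.1.3 (apt), loaded with JetPack 4.4's TRT 7.1.0:
print(verify_header("7.1.3", "7.1.0"))  # False -> deserialization fails
print(verify_header("7.1.0", "7.1.0"))  # True  -> versions match
```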

The portability issue was due to the GPU architecture. A TensorRT engine is built for the architecture of the GPU it was converted on, so the conversion must happen on a GPU with the same architecture as the Jetson. For example, a Jetson Nano and a 1080Ti have different architectures, so an engine built on one will not load on the other.
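To illustrate the point above: TensorRT engines are tied to the compute capability (SM version) of the GPU they were built on. The capability values below are the published ones for these devices; the check itself is a hedged toy sketch, not a TensorRT API.

```python
# Toy sketch of why an engine built on one GPU architecture fails to
# load on another: the serialized engine targets a specific compute
# capability, and the deployment GPU must match it.

COMPUTE_CAPABILITY = {
    "GTX 1080 Ti": (6, 1),    # Pascal, sm_61
    "Jetson Nano": (5, 3),    # Maxwell, sm_53
}

def engine_is_portable(build_gpu: str, deploy_gpu: str) -> bool:
    """An engine loads only when the deployment GPU has the same
    compute capability as the GPU the engine was built on."""
    return COMPUTE_CAPABILITY[build_gpu] == COMPUTE_CAPABILITY[deploy_gpu]

# Engine built on a 1080Ti cannot be deserialized on a Jetson Nano:
print(engine_is_portable("GTX 1080 Ti", "Jetson Nano"))  # False
print(engine_is_portable("Jetson Nano", "Jetson Nano"))  # True
```

In practice this means running the TF-TRT conversion (or at least the engine build step) on the Jetson itself, or on a GPU of the same architecture.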