NVIDIA-AI-IOT/tf_to_trt_image_classification

Convert frozen graph to TensorRT engine issue

rahulsharma11 opened this issue · 8 comments

Hi, while converting a frozen graph to a TensorRT engine on the Jetson TX2, I get the following error:
Using output node InceptionV1/Logits/SpatialSqueeze
Converting to UFF graph
No. nodes: 486
UFF Output written to data/tmp.uff
UFFParser: Validator error: InceptionV1/Logits/SpatialSqueeze: Unsupported operation Squeeze
Failed to parse UFF

Any suggestion?
Thanks.
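As an aside, the validator error above names the node and the unsupported op directly, so it can be pulled out of the converter log programmatically when triaging a graph before conversion. A minimal stdlib sketch (the log text is copied from this issue; the helper function is illustrative, not part of any NVIDIA tool):

```python
# Hedged sketch: extract (node, op) pairs for unsupported operations
# from UFF parser log output using a stdlib regex. The log below is
# taken verbatim from this issue.
import re

LOG = """UFFParser: Validator error: InceptionV1/Logits/SpatialSqueeze: Unsupported operation Squeeze
Failed to parse UFF"""

def unsupported_ops(log_text):
    """Return a list of (node_name, op_name) tuples from validator errors."""
    pattern = r"Validator error: (\S+): Unsupported operation (\w+)"
    return re.findall(pattern, log_text)

print(unsupported_ops(LOG))  # [('InceptionV1/Logits/SpatialSqueeze', 'Squeeze')]
```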

The squeeze operation is available in TensorRT 3 (GA), which is provided by the latest JetPack 3.2 release. The version of TensorRT shipped with the JetPack 3.2 developer preview does not have this support.

You must install TensorRT on your Jetson TX2 with the latest JetPack 3.2. Hope this helps.
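To double-check which side of that cutoff an installation is on, the version string from `dpkg -l | grep tensorrt` can be compared against 3.0.4. A minimal sketch, assuming the dpkg line format shown later in this thread (the helper and its threshold are illustrative):

```python
# Hedged sketch: decide whether the TensorRT version reported by dpkg
# is at least 3.0.4, the first release whose UFF parser supports the
# Squeeze op (per this thread). The sample lines mirror dpkg output
# quoted in this issue; the function itself is not an NVIDIA tool.
import re

def tensorrt_supports_squeeze(dpkg_line, minimum=(3, 0, 4)):
    """Parse a 'dpkg -l | grep tensorrt' line and compare the version."""
    match = re.search(r"tensorrt\s+(\d+)\.(\d+)\.(\d+)", dpkg_line)
    if not match:
        raise ValueError("no TensorRT version found in line")
    version = tuple(int(part) for part in match.groups())
    return version >= minimum

# Version from the JetPack 3.2 developer preview era: too old.
print(tensorrt_supports_squeeze("ii tensorrt 3.0.0-1+cuda9.0 arm64"))  # False
# Version from the final JetPack 3.2 release: supported.
print(tensorrt_supports_squeeze("ii tensorrt 3.0.4-1+cuda9.0 arm64"))  # True
```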

Hi,
I have some doubts about my checks:

  1. My Jetson TX2 has the latest JetPack 3.2, and I downloaded TensorRT 3.0.4 from NVIDIA, but that package is built for the amd64 architecture: it throws an error while installing on the Jetson TX2. How can I install it?

  2. I also checked with "dpkg -l | grep nvinfer", which gives:
    ii libnvinfer-dev 4.0.0-1+cuda9.0 arm64 TensorRT development libraries and header
    ii libnvinfer-samples 4.0.0-1+cuda9.0 arm64 TensorRT samples and documentation
    ii libnvinfer4 4.0.0-1+cuda9.0 arm64 TensorRT runtime libraries
    Does that mean I have TensorRT 4?

  3. I am also checking with grep parseSqueeze, which gives no output on the TX2, while on my desktop it gives valid output.
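That grep check can also be reproduced without the grep binary: searching the target file's raw bytes for the symbol name gives the same present/absent answer. A minimal stdlib sketch; which file to scan depends on where TensorRT is installed on your system, so the example below uses a throwaway file rather than a real library path:

```python
# Hedged sketch: mimic `grep parseSqueeze <file>` by scanning a file's
# raw bytes for the symbol name. On a real system you would point this
# at the relevant TensorRT converter source or library file; the file
# created here is only a stand-in for demonstration.
import os
import tempfile

def contains_symbol(path, symbol=b"parseSqueeze"):
    """Return True if the file at `path` contains `symbol` as raw bytes."""
    with open(path, "rb") as handle:
        return symbol in handle.read()

# Demonstration with a throwaway file that does contain the symbol.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as handle:
    handle.write(b"registerOp parseSoftmax parseSqueeze")
print(contains_symbol(path))  # True
os.remove(path)
```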

Typing

'dpkg -l | grep tensorrt'

should list the version of TensorRT. This is different from the nvinfer version. The version installed with the latest JetPack 3.2 is TensorRT 3.0.4.

TensorRT 3.0.4 is installed using the JetPack executable. You should run the JetPack installer on a host machine to load the software onto the Jetson TX2. If your Jetson is already flashed with the latest JetPack, you can deselect the option to flash the OS and instead just install TensorRT (and any dependent packages).

Hi @rahulsharma11, TensorRT itself must be installed from JetPack, but the documentation says you need to download TensorRT 3.0.4 from NVIDIA to get the whl for the uff parser (even though that whl is contained in a tar file built exclusively for the amd64 architecture). So: install TensorRT from JetPack, then download the TensorRT tarball from NVIDIA, extract the uff package (or just pip install it) and remove the folder.
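Since only the uff wheel is needed from that tarball, you can extract it selectively instead of unpacking the whole (amd64-only) archive. A minimal stdlib sketch; the archive name and the wheel's path inside it are illustrative, so check the actual tarball contents:

```python
# Hedged sketch: pull only uff wheel files out of the TensorRT .tar.gz
# downloaded from NVIDIA, leaving the amd64-only libraries untouched.
# Member paths vary between TensorRT releases; inspect the archive
# (tar -tzf) before relying on any particular layout.
import tarfile

def extract_uff_wheels(archive_path, dest="."):
    """Extract every member whose name contains 'uff' and ends in .whl."""
    extracted = []
    with tarfile.open(archive_path, "r:*") as archive:
        for member in archive.getmembers():
            if "uff" in member.name and member.name.endswith(".whl"):
                archive.extract(member, path=dest)
                extracted.append(member.name)
    return extracted

# Afterwards: pip install the extracted wheel, then delete the tarball.
```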

@jaybdub-nv Maybe add the uff package to pip? Or release the whl somewhere else as a standalone whl so anyone can pip install it?

Hi, @jaybdub-nv ,
Thanks for the reply.
Grepping for tensorrt gives:
ii nv-tensorrt-repo-ubuntu1604-pipecleaner-cuda9.0-trt3.0-20171116 1-1 arm64 nv-tensorrt repository configuration files
ii tensorrt 3.0.0-1+cuda9.0 arm64 Meta package of TensorRT

So I guess it has TensorRT 3. Is TensorRT 3.0.0 not compatible? Will it only work with 3.0.4?

Support for the squeeze operation was added after TensorRT 3.0.0, so it will only work with TensorRT 3.0.4.

Ok, thanks @jaybdub-nv and @Davidnet.