tensorflow/tensorrt

How to test if my TensorFlow has TensorRT?

MachineJeff opened this issue · 10 comments

I have installed TensorFlow with pip:

pip install tensorflow-gpu==1.14.0

Now, how can I test whether TensorRT support is available?

Run the following to check the linked and loaded TensorRT versions

from tensorflow.compiler.tf2tensorrt.wrap_py_utils import get_linked_tensorrt_version
from tensorflow.compiler.tf2tensorrt.wrap_py_utils import get_loaded_tensorrt_version

print(f"Linked TensorRT version {get_linked_tensorrt_version()}")
print(f"Loaded TensorRT version {get_loaded_tensorrt_version()}")

If TensorRT is linked and loaded you should see something like this:

Linked TensorRT version (5, 1, 5)
Loaded TensorRT version (5, 1, 5)

Otherwise you'll just get (0, 0, 0)
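
If you want a single script that treats both failure modes the same way (TensorRT compiled out, or the runtime library missing), here is a small sketch built around the imports above. It uses the TF 1.x import path; that path moves in TF 2.x, as comes up later in this thread.

# Sketch: summarize whether this TensorFlow build has usable TensorRT support.
# Uses the TF 1.x import path from the snippet above; if the import itself
# fails (e.g. a wheel built without the TensorRT bindings), report that too.
def tensorrt_status():
    try:
        from tensorflow.compiler.tf2tensorrt.wrap_py_utils import (
            get_linked_tensorrt_version,
            get_loaded_tensorrt_version,
        )
    except ImportError:
        return "TensorRT bindings are not present in this TensorFlow build"

    linked = get_linked_tensorrt_version()
    loaded = get_loaded_tensorrt_version()
    if linked == (0, 0, 0):
        return "TensorFlow was built without TensorRT"
    if loaded == (0, 0, 0):
        return f"Built against TensorRT {linked}, but libnvinfer could not be loaded at runtime"
    return f"TensorRT linked {linked}, loaded {loaded}"

print(tensorrt_status())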

I don't think the pip version is compiled with TensorRT. You have to either build TensorFlow from source with TensorRT enabled, or use one of NVIDIA's TensorFlow Docker container images, which already come with TensorRT.

@panchgonzalez Thank you very much! It's very nice of you!

from tensorflow.compiler.tf2tensorrt.wrap_py_utils import get_linked_tensorrt_version
from tensorflow.compiler.tf2tensorrt.wrap_py_utils import get_loaded_tensorrt_version

print(f"Linked TensorRT version {get_linked_tensorrt_version()}")
print(f"Loaded TensorRT version {get_loaded_tensorrt_version()}")


ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-...> in <module>
----> 1 from tensorflow.compiler.tf2tensorrt.wrap_py_utils import get_linked_tensorrt_version
      2 from tensorflow.compiler.tf2tensorrt.wrap_py_utils import get_loaded_tensorrt_version
      3
      4 print(f"Linked TensorRT version {get_linked_tensorrt_version()}")
      5 print(f"Loaded TensorRT version {get_loaded_tensorrt_version()}")

ModuleNotFoundError: No module named 'tensorflow.compiler.tf2tensorrt'

Which TensorFlow version?

TensorFlow version = 2.3.1
CUDA version = 10.1 update 2
cuDNN version = 7.6.5 (Nov 18th, 2019)
TensorRT version = 6.0.5
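
On TF 2.x the tensorflow.compiler.tf2tensorrt.wrap_py_utils path no longer exists, so the snippet above cannot work on 2.3.1. A rough alternative is to inspect the wheel's build info; the exact keys vary between releases (is_tensorrt_build is not reported everywhere), so treat the key names here as assumptions and read them defensively:

# Sketch for TF 2.x: ask the installed wheel how it was built.
# Key names differ across releases, so use .get() with a default
# instead of indexing directly.
import tensorflow as tf

info = tf.sysconfig.get_build_info()
print("CUDA build:    ", info.get("is_cuda_build", False))
print("CUDA version:  ", info.get("cuda_version", "unknown"))
print("cuDNN version: ", info.get("cudnn_version", "unknown"))
print("TensorRT build:", info.get("is_tensorrt_build", "not reported by this version"))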

I have the same error. I installed on my Windows 10 system with an Nvidia GTX 1650 (CUDA-capable), following the TensorRT 6 installation guide through the last step (step 6). But when I run your snippet in the PyCharm IDE (Community 2020.2.1), using the conda virtual environment in which I installed TensorFlow-GPU, I get the following error:

2020-09-27 14:48:12.637327: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll
Traceback (most recent call last):
  File "D:/My Python stuff/TEST/Test.py", line 1, in <module>
    from tensorflow.compiler.tf2tensorrt.wrap_py_utils import get_linked_tensorrt_version
ModuleNotFoundError: No module named 'tensorflow.compiler.tf2tensorrt'

Process finished with exit code 1

I checked earlier, and my TensorFlow-GPU installation itself is working properly.
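
For what it's worth, "TensorFlow-GPU is installed properly" and "TensorRT is available" are separate questions. A quick sketch to confirm the GPU itself is visible on TF 2.x (this says nothing about TensorRT):

# Sanity check: does TensorFlow see the GPU at all?
# Passing this only confirms the CUDA/cuDNN setup, not TensorRT support.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus if gpus else "none")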

Here is a case where TensorRT is not installed:

$ docker run --rm -it docker.io/nvidia/cuda:11.5.1-cudnn8-runtime-ubuntu20.04 bash
$ apt-get update && apt-get install -y python3 python3-pip
$ python3 -m pip install tensorflow
$ python3
>>> import tensorflow
>>> tensorflow.__version__
'2.8.0'
>>> from tensorflow.python.compiler.tensorrt import trt_convert as trt
>>> trt.trt_utils._pywrap_py_utils.get_linked_tensorrt_version()
(7, 2, 2)
>>> trt.trt_utils._pywrap_py_utils.get_loaded_tensorrt_version()
2022-03-24 08:59:15.415733: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64

To install the right version of libnvinfer, you can follow TensorFlow's GPU Dockerfile:
https://github.com/tensorflow/tensorflow/blob/v2.8.0/tensorflow/tools/dockerfiles/dockerfiles/gpu.Dockerfile#L61-L70
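
Independent of Docker, a quick way to check whether the runtime library TensorFlow is complaining about is reachable at all is to try loading it directly. This is only a Linux sketch; the soname libnvinfer.so.7 is taken from the warning above, so adjust it to your TensorRT major version:

# Sketch: check whether libnvinfer.so.7 (from the dso_loader warning above)
# can be dlopen'ed from the current library search path.
import ctypes

try:
    ctypes.CDLL("libnvinfer.so.7")
    print("libnvinfer.so.7 is loadable")
except OSError as err:
    print("libnvinfer.so.7 is not loadable:", err)
    print("Install the matching libnvinfer packages or add their location to LD_LIBRARY_PATH")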