nvinfer1: Engine fails to deserialize from stream on other GPU
albert-grigoryan opened this issue · 0 comments
albert-grigoryan commented
System information
- 1st host (working configuration)
Docker: v19.03.8
NVIDIA driver: v440.64
TensorRT: v6.0.1 (from nvcr.io/nvidia/tensorrt:19.12-py3)
GPU: GeForce GTX 1060 Mobile
- 2nd host (failing configuration)
Docker: v19.03.8
NVIDIA driver: v450.51.06 (the same behavior was also observed with the same driver version as the 1st host)
TensorRT: v6.0.1 (from nvcr.io/nvidia/tensorrt:19.12-py3)
GPU: Tesla K80
The same Docker image works on the 1st host but fails on the 2nd one: the deserializeCudaEngine(...) function returns null.
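For reference, the deserialization is done roughly as follows (a minimal sketch against the TensorRT 6 C++ API; the logger class and the engine file path are placeholders, not the actual application code):

```cpp
#include <fstream>
#include <iostream>
#include <vector>
#include <NvInfer.h>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) override {
    if (severity <= Severity::kWARNING)
      std::cerr << msg << std::endl;
  }
};

int main() {
  // Read the serialized engine produced on the build machine
  // ("model.engine" is a placeholder path).
  std::ifstream file("model.engine", std::ios::binary);
  std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                         std::istreambuf_iterator<char>());

  Logger logger;
  nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
  // Returns null on the Tesla K80 host; works on the GTX 1060 host.
  nvinfer1::ICudaEngine* engine =
      runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
  if (!engine) {
    std::cerr << "deserializeCudaEngine failed" << std::endl;
    return 1;
  }
  engine->destroy();
  runtime->destroy();
  return 0;
}
```

Note that, per the TensorRT documentation, serialized engines are specific to the GPU model they were built on, so an engine built on a Pascal GPU (GTX 1060) is not expected to deserialize on a Kepler GPU (Tesla K80); rebuilding the engine on the K80 host may be required.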
$ cat details
Creating network "scylla-test_default" with the default driver
Creating vm_1 ... done
Attaching to vm_1
vm_1 |
vm_1 | =====================
vm_1 | == NVIDIA TensorRT ==
vm_1 | =====================
vm_1 |
vm_1 | NVIDIA Release 19.12 (build 9143065)
vm_1 |
vm_1 | NVIDIA TensorRT 6.0.1 (c) 2016-2019, NVIDIA CORPORATION. All rights reserved.
vm_1 | Container image (c) 2019, NVIDIA CORPORATION. All rights reserved.
vm_1 |
vm_1 | https://developer.nvidia.com/tensorrt
vm_1 |
vm_1 | To install Python sample dependencies, run /opt/tensorrt/python/python_setup.sh
vm_1 |
vm_1 | To install open source parsers, plugins, and samples, run /opt/tensorrt/install_opensource.sh. See https://github.com/NVIDIA/TensorRT/tree/19.12 for more information.
vm_1 |
vm_1 | NOTE: Legacy NVIDIA Driver detected. Compatibility mode ENABLED.
vm_1 |
...
vm_1 | Segmentation fault (core dumped)
vm_1 exited with code 139