TPAT and TRT - no kernel image is available for execution on the device
qraleq opened this issue · 1 comments
Hi,
I've successfully converted a model to TensorRT with a TPAT-generated plugin using the following command:
/usr/src/tensorrt/bin/trtexec --onnx=model_batch1_tpat.onnx --saveEngine=model.plan --buildOnly --verbose --fp16 --workspace=6000 --explicitBatch --noTF32 --plugins="tpat_onehot.so"
but when I test the resulting engine with trtexec using this command:
/usr/src/tensorrt/bin/trtexec --loadEngine=model.plan --verbose --workspace=6000 --plugins="./tpat_onehot.so"
I'm getting the following errors:
[06/02/2022-02:55:34] [E] [TRT] ../rtExt/cuda/cudaPluginV2DynamicExtRunner.cpp (108) - Cuda Error in execute: 209 (no kernel image is available for execution on the device)
[06/02/2022-02:55:34] [E] [TRT] FAILED_EXECUTION: std::exception
I did manage to generate one build of tpat_onehot.so that doesn't throw this error, but I can't see any difference in how I generated the two plugins. Is there something in the non-deterministic process of generating a plugin with TVM that could cause this behavior?
Thank you!
It looks like the error is caused by your environment.
https://forums.developer.nvidia.com/t/tensorrt-no-kernel-image-is-available-for-execution-on-the-device-error-48-hex-0x30/62307
https://forums.developer.nvidia.com/t/runtimeerror-cuda-error-no-kernel-image-is-available-for-execution-on-the-device/167708
Maybe you can modify 'CUDA_PATH' in TPAT/python/trt_plugin/Makefile. "No kernel image is available for execution on the device" usually means the plugin's CUDA kernels were compiled for a compute capability that does not match the GPU you are running on, so make sure the Makefile builds the plugin for your device's architecture.
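As a rough check (a sketch, assuming the CUDA toolkit's nvidia-smi and cuobjdump are on your PATH, and using ./tpat_onehot.so as an example path), you can compare your GPU's compute capability with the architectures embedded in the generated plugin:

# GPU compute capability, e.g. 8.6 -> sm_86
# (the compute_cap query field needs a fairly recent driver;
# deviceQuery from the CUDA samples reports the same information)
nvidia-smi --query-gpu=name,compute_cap --format=csv

# SM architectures the plugin's device code was actually built for
cuobjdump --list-elf ./tpat_onehot.so

If the sm_XX entries listed for the .so do not include your GPU's architecture, rebuild the plugin after fixing CUDA_PATH / the arch flags in TPAT/python/trt_plugin/Makefile.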