fabio-sim/LightGlue-ONNX

Error when converting ONNX model to TensorRT engine

long-senpai opened this issue · 4 comments

Hello, thank you for the great work on LightGlue-ONNX. I want to convert the ONNX model to TensorRT to run it in a C++ application. However, I got these errors when I used trt_infer.py to build the engine file:
```
[02/06/2024-19:26:59] [TRT] [W] onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/06/2024-19:26:59] [TRT] [W] onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[02/06/2024-19:26:59] [TRT] [E] ModelImporter.cpp:726: While parsing node number 177 [LayerNormalization -> "/transformers.0/self_attn/ffn/ffn.1/LayerNormalization_output_0"]:
[02/06/2024-19:26:59] [TRT] [E] ModelImporter.cpp:727: --- Begin node ---
[02/06/2024-19:26:59] [TRT] [E] ModelImporter.cpp:728: input: "/transformers.0/self_attn/ffn/ffn.0/Add_output_0"
input: "transformers.0.self_attn.ffn.1.weight"
input: "transformers.0.self_attn.ffn.1.bias"
output: "/transformers.0/self_attn/ffn/ffn.1/LayerNormalization_output_0"
name: "/transformers.0/self_attn/ffn/ffn.1/LayerNormalization"
op_type: "LayerNormalization"
attribute {
  name: "axis"
  i: -1
  type: INT
}
attribute {
  name: "epsilon"
  f: 1e-05
  type: FLOAT
}

[02/06/2024-19:26:59] [TRT] [E] ModelImporter.cpp:729: --- End node ---
[02/06/2024-19:26:59] [TRT] [E] ModelImporter.cpp:731: ERROR: builtin_op_importers.cpp:5427 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
In node 177 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Traceback (most recent call last):
  File "trt_infer.py", line 119, in <module>
    build_engine(model_path, output_path)
  File "trt_infer.py", line 30, in build_engine
    raise Exception
Exception
```

My system information:
Jetson Orin, JetPack 5.1.1, CUDA 11.4, TensorRT 8.5.2.2.
I installed all the requirements following the requirements file in your repo.
Can you give me some advice?

Hi @long-senpai, thank you for your interest in LightGlue-ONNX.

I think this is due to LayerNormalization being available only after TensorRT 8.6.
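For reference, the cutoff can be checked up front so the build fails with a readable message instead of the bare `raise Exception` seen in the traceback. This is a minimal stdlib-only sketch; the helper names are hypothetical, and the 8.6 threshold is taken from the comment above:

```python
# Hypothetical helper: decide whether a given TensorRT build can import the
# ONNX LayerNormalization op natively (assumed supported from TRT 8.6 on).
def supports_layernorm(trt_version: str) -> bool:
    """trt_version is a dotted string such as '8.5.2.2' or '8.6.1'."""
    major, minor = (int(part) for part in trt_version.split(".")[:2])
    return (major, minor) >= (8, 6)


# Hypothetical fail-fast check to run before building the engine.
def check_layernorm_support(trt_version: str) -> None:
    if not supports_layernorm(trt_version):
        raise RuntimeError(
            f"TensorRT {trt_version} has no native LayerNormalization "
            "importer; upgrade to >= 8.6, or re-export the ONNX model at "
            "an opset below 17 so the op is decomposed into primitives."
        )


print(supports_layernorm("8.5.2.2"))  # False
print(supports_layernorm("8.6.1"))    # True
```

In an actual script, the version string could come from `tensorrt.__version__` before the parser is invoked.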


I use TensorRT 8.6.1, but I still get the same error.

Has anyone found a solution to this issue?
my specs:
" === Device Information ===
[06/23/2024-09:57:36] [I] Selected Device: Orin
[06/23/2024-09:57:36] [I] Compute Capability: 8.7
[06/23/2024-09:57:36] [I] SMs: 8
[06/23/2024-09:57:36] [I] Compute Clock Rate: 0.624 GHz
[06/23/2024-09:57:36] [I] Device Global Memory: 6480 MiB
[06/23/2024-09:57:36] [I] Shared Memory per SM: 164 KiB
[06/23/2024-09:57:36] [I] Memory Bus Width: 64 bits (ECC disabled)
[06/23/2024-09:57:36] [I] Memory Clock Rate: 0.624 GHz
[06/23/2024-09:57:36] [I]
[06/23/2024-09:57:36] [I] TensorRT version: 8.5.2
[06/23/2024-09:57:37] [I] [TRT] [MemUsageChange] Init CUDA: CPU +220, GPU +0, now: CPU 249, GPU 3192 (MiB)
[06/23/2024-09:57:40] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +302, GPU +285, now: CPU 574, GPU 3498 (MiB)
[06/23/2024-09:57:40] [I] Start parsing network model
[06/23/2024-09:57:40] [I] [TRT] ----------------------------------------------------------------
[06/23/2024-09:57:40] [I] [TRT] Input filename: superpoint_640x480_inferred.onnx
[06/23/2024-09:57:40] [I] [TRT] ONNX IR version: 0.0.8
[06/23/2024-09:57:40] [I] [TRT] Opset version: 17
[06/23/2024-09:57:40] [I] [TRT] Producer name: pytorch
[06/23/2024-09:57:40] [I] [TRT] Producer version: 2.3.1
[06/23/2024-09:57:40] [I] [TRT] Domain:
[06/23/2024-09:57:40] [I] [TRT] Model version: 0
[06/23/2024-09:57:40] [I] [TRT] Doc string: "

Released ONNX files with better shape support in the Releases.