fabio-sim/LightGlue-ONNX

onnxruntime TRT error

1320414730 opened this issue · 8 comments

I'm getting an error when running the ONNX model with onnxruntime's TensorRT support:
2023-09-28 15:18:58.706230695 [W:onnxruntime:Default, tensorrt_execution_provider.h:63 log] [2023-09-28 07:18:58 WARNING] nx_tensorrt-src/onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
2023-09-28 15:18:58.706368570 [W:onnxruntime:Default, tensorrt_execution_provider.h:63 log] [2023-09-28 07:18:58 WARNING] nx_tensorrt-src/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
2023-09-28 15:18:58.706658696 [W:onnxruntime:Default, tensorrt_execution_provider.h:63 log] [2023-09-28 07:18:58 WARNING] nx_tensorrt-src/onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
Process finished with exit code 139
Could you help me figure out what's causing this? I've seen people online simplifying the model with onnx-simplifier, but I get an error when I try to simplify it myself. I'm using the ONNX file you provided.
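For reference, the TensorRT-backed session that produces the warnings above can be set up roughly as in the sketch below; the model path, dummy input shape, and provider fallback order are illustrative assumptions, not values taken from the repo's infer.py.

```python
# Minimal sketch: create an onnxruntime session with the TensorRT EP (falling
# back to CUDA/CPU) to see whether the crash happens at session creation or
# at the first run.
import numpy as np
import onnxruntime as ort

available = ort.get_available_providers()
providers = [
    p
    for p in ("TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider")
    if p in available
]

session = ort.InferenceSession("weights/superpoint.onnx", providers=providers)
print("Active providers:", session.get_providers())

# SuperPoint takes a single-channel float image; the shape here is an assumption.
dummy = np.random.rand(1, 1, 512, 512).astype(np.float32)
outputs = session.run(None, {session.get_inputs()[0].name: dummy})
print([o.shape for o in outputs])
```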

Hello @1320414730, thank you for your interest in LightGlue-ONNX.

The INT64->INT32 cast shouldn't be the problem. Exit code 139 looks like a shape-inference error. I also tried onnx-simplifier, but without success.
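If it helps narrow things down, one way to check where shape inference breaks is to run it (and onnx-simplifier) directly on the file; a rough sketch, assuming the fused fp16 model from the release assets:

```python
import onnx
import onnx.shape_inference
from onnxsim import simplify  # pip install onnx-simplifier

model = onnx.load("weights/superpoint_lightglue_fused_fp16.onnx")

# Plain ONNX shape inference; dynamic or fused graphs often fail here first.
inferred = onnx.shape_inference.infer_shapes(model)

# onnx-simplifier; `ok` is False if the simplified graph fails its validation check.
simplified, ok = simplify(model)
print("onnx-simplifier check passed:", ok)
```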

Hello, I followed your TensorRT instructions. Running in the terminal gives a segmentation fault, and the debug console returns exit code 139. Do you have any suggestions?

Versions:

WSL2 Ubuntu 22.04
Python 3.10.13
CUDA 11.8
CUDNN 8.9.4
TensorRT 8.6.1
numpy==1.24.1
onnxruntime-gpu==1.16.0
opencv-python==4.8.0.76
matplotlib==3.8.0

Models: superpoint.onnx, superpoint_lightglue_fused_fp16.onnx
Command:

python infer.py --viz --extractor_type superpoint --extractor_path weights/superpoint.onnx --lightglue_path weights/superpoint_lightglue_fused_fp16.onnx --img_paths assets/sacre_coeur1.jpg assets/sacre_coeur2.jpg --trt
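For anyone hitting the same segmentation fault, a stripped-down session with verbose ORT logging and TensorRT engine caching can help narrow down whether the crash happens while building the engine or during inference. This is only a diagnostic sketch using standard TensorRT EP provider options; the paths are illustrative and it is not part of infer.py.

```python
# Diagnostic sketch: verbose onnxruntime logging plus TensorRT engine caching.
import onnxruntime as ort

so = ort.SessionOptions()
so.log_severity_level = 0  # verbose logging

trt_options = {
    "trt_engine_cache_enable": True,
    "trt_engine_cache_path": "trt_cache",  # example cache directory
    "trt_fp16_enable": True,               # build the TensorRT engine with fp16 kernels
}

session = ort.InferenceSession(
    "weights/superpoint_lightglue_fused_fp16.onnx",
    sess_options=so,
    providers=[
        ("TensorrtExecutionProvider", trt_options),
        "CUDAExecutionProvider",
        "CPUExecutionProvider",
    ],
)
print(session.get_providers())
```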

Right, the v0.1.3 end2end model has six outputs, whereas the v1.0.0 one has four.
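A quick way to confirm which variant a given file is: count the graph outputs with the onnx package (the file name below is just an example):

```python
import onnx

model = onnx.load("weights/superpoint_lightglue_end2end.onnx")  # example path
outputs = [o.name for o in model.graph.output]
print(len(outputs), outputs)  # 6 outputs -> v0.1.3 layout, 4 -> v1.0.0
```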

Thank you very much, I'll close this issue.