fabio-sim/LightGlue-ONNX

UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 17

Closed this issue · 3 comments

Hi, when trying to export the ONNX model, I get the error shown below. The workflow works fine for torch model inference, but during ONNX export the `scaled_dot_product_attention` operator fails. Any insights on how to solve this problem?

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 5>:5 │
│ │
│ 2 extractor_path = f"weights/{extractor_type}.onnx" │
│ 3 lightglue_path = f"weights/{extractor_type}_lightglue.onnx" │
│ 4 │
│ ❱ 5 export_onnx( │
│ 6 │ extractor_type=extractor_type, │
│ 7 │ extractor_path=extractor_path, │
│ 8 │ lightglue_path=lightglue_path, │
│ │
│ /mnt/data/github/LightGlue-ONNX/export.py:183 in export_onnx │
│ │
│ 180 │ │ kpts0 = normalize_keypoints(kpts0, image0.shape[1], image0.shape[2]) │
│ 181 │ │ kpts1 = normalize_keypoints(kpts1, image1.shape[1], image1.shape[2]) │
│ 182 │ │ │
│ ❱ 183 │ │ torch.onnx.export( │
│ 184 │ │ │ lightglue, │
│ 185 │ │ │ (kpts0, kpts1, desc0, desc1), │
│ 186 │ │ │ lightglue_path, │
│ │
│ /home/data/app/miniconda3/lib/python3.9/site-packages/torch/onnx/utils.py:506 in export │
│ │
│ 503 │ │ │ All errors are subclasses of :class:errors.OnnxExporterError. │
│ 504 │ """ │
│ 505 │ │
│ ❱ 506 │ _export( │
│ 507 │ │ model, │
│ 508 │ │ args, │
│ 509 │ │ f, │
│ │
│ /home/data/app/miniconda3/lib/python3.9/site-packages/torch/onnx/utils.py:1548 in _export │
│ │
│ 1545 │ │ │ │ dynamic_axes = {} │
│ 1546 │ │ │ _validate_dynamic_axes(dynamic_axes, model, input_names, output_names) │
│ 1547 │ │ │ │
│ ❱ 1548 │ │ │ graph, params_dict, torch_out = _model_to_graph( │
│ 1549 │ │ │ │ model, │
│ 1550 │ │ │ │ args, │
│ 1551 │ │ │ │ verbose, │
│ │
│ /home/data/app/miniconda3/lib/python3.9/site-packages/torch/onnx/utils.py:1117 in │
│ _model_to_graph │
│ │
│ 1114 │ params_dict = _get_named_param_dict(graph, params) │
│ 1115 │ │
│ 1116 │ try: │
│ ❱ 1117 │ │ graph = _optimize_graph( │
│ 1118 │ │ │ graph, │
│ 1119 │ │ │ operator_export_type, │
│ 1120 │ │ │ _disable_torch_constant_prop=_disable_torch_constant_prop, │
│ │
│ /home/data/app/miniconda3/lib/python3.9/site-packages/torch/onnx/utils.py:665 in _optimize_graph │
│ │
│ 662 │ │ _C._jit_pass_onnx_set_dynamic_input_shape(graph, dynamic_axes, input_names) │
│ 663 │ _C._jit_pass_onnx_lint(graph) │
│ 664 │ │
│ ❱ 665 │ graph = _C._jit_pass_onnx(graph, operator_export_type) │
│ 666 │ _C._jit_pass_onnx_lint(graph) │
│ 667 │ _C._jit_pass_lint(graph) │
│ 668 │
│ │
│ /home/data/app/miniconda3/lib/python3.9/site-packages/torch/onnx/utils.py:1901 in │
│ _run_symbolic_function │
│ │
│ 1898 │ │ │ # Clone node to trigger ONNX shape inference │
│ 1899 │ │ │ return graph_context.op(op_name, *inputs, **attrs, outputs=node.outputsSize( │
│ 1900 │ │ │
│ ❱ 1901 │ │ raise errors.UnsupportedOperatorError( │
│ 1902 │ │ │ symbolic_function_name, │
│ 1903 │ │ │ opset_version, │
│ 1904 │ │ │ symbolic_function_group.get_min_supported() │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 17 is
not supported. Please feel free to request support or submit a pull request on PyTorch GitHub:
https://github.com/pytorch/pytorch/issues.

I also tried using a different opset like 14, but the error stays the same :(

Your TensorRT version must be 8.6. I ran into this problem before too.

After some searching, I think I figured out what happened: since version 2.0, PyTorch provides a new implementation of scaled dot-product attention (SDPA) that is not supported by ONNX export. To solve this problem, you can either use torch 1.x or rewrite the SDPA ops in the code.
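As a rough illustration of the "rewrite the SDPA ops" route, here is a minimal sketch (not LightGlue's actual attention module, just the generic pattern): replace the `F.scaled_dot_product_attention` call with the explicit matmul + softmax formulation, which traces to plain ONNX ops (`MatMul`, `Softmax`) that the exporter already supports.

```python
import math
import torch

def sdpa_export_friendly(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Drop-in replacement for F.scaled_dot_product_attention (no mask, no dropout).

    Computes softmax(q @ k^T / sqrt(d)) @ v with explicit ops so that
    torch.onnx.export can trace it to MatMul/Softmax instead of the
    unsupported aten::scaled_dot_product_attention node.
    """
    scale = 1.0 / math.sqrt(q.shape[-1])
    # (..., L, d) @ (..., d, S) -> (..., L, S) attention scores
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale
    attn = torch.softmax(scores, dim=-1)
    # (..., L, S) @ (..., S, d) -> (..., L, d)
    return torch.matmul(attn, v)
```

In the model code, you would swap the `F.scaled_dot_product_attention(q, k, v)` call for `sdpa_export_friendly(q, k, v)` before calling `torch.onnx.export`. Note this loses the fused/flash kernels that the PyTorch 2.x op dispatches to, so it is only worth doing for the export path.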