fabio-sim/LightGlue-ONNX

ONNX opset version 12 is not supported


I'm deploying the model to an embedded platform, but the platform only supports opset <= 12, so setting opset_version in export.py to 12 produces an error.

Hi @valenbase , thank you for your interest in LightGlue-ONNX!

There are several reasons why opset_version=16 is used as the default for export:

  • SuperPoint uses the GridSample operation, for which support was introduced in version 16
  • ONNX support for aten::scaled_dot_product_attention in LightGlue requires version 14
  • LightGlue has since been updated in the original repo to use unflatten operations, which require version 13; this fork still keeps the einops.rearrange calls, but I'm not sure when I'll sync this fork with those changes.
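
For context, the target opset is fixed at export time via the opset_version argument of torch.onnx.export. Here is a minimal sketch using a toy module rather than the actual models wired up in export.py (the filename and tensor names below are placeholders):

    import torch

    # Toy module standing in for the extractor/matcher; export.py wires up
    # SuperPoint/DISK and LightGlue instead.
    model = torch.nn.Linear(4, 4).eval()
    dummy_input = torch.randn(1, 4)

    torch.onnx.export(
        model,
        dummy_input,
        "model.onnx",
        opset_version=16,  # ops such as GridSample require >= 16
        input_names=["input"],
        output_names=["output"],
    )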

However, if you don't mind using DISK instead of SuperPoint, I managed to export the DISK extractor and LightGlue to opset version 12 (with --dynamic):

I hope you find these models helpful!
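
If it helps, here is a minimal sketch of loading one of them with onnxruntime. The filename is an assumption based on the export flags, so adjust it to the actual download, and inspect the graph's input/output names before running inference:

    import onnxruntime as ort

    # "disk_lightglue_end2end.onnx" is an assumed filename for the attached
    # model; substitute whatever the download is called.
    session = ort.InferenceSession(
        "disk_lightglue_end2end.onnx", providers=["CPUExecutionProvider"]
    )

    # The actual input/output names and shapes depend on the export flags
    # (--dynamic, --end2end), so check them before building a feed dict.
    for inp in session.get_inputs():
        print(inp.name, inp.shape, inp.type)
    for out in session.get_outputs():
        print(out.name, out.shape, out.type)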

thank you!

@fabio-sim can you give me the model for --end2end?

Here you go:

In case anyone would like to reproduce LightGlue ONNX with opset 12 compatibility, these are the changes required (using release v0.1.3 as a reference point):

  1. Modify LightGlue's Attention module forward to always use the einsum branch instead of the scaled_dot_product_attention branch (e.g., by removing the if branch; see the sketch after this list). For reference, the original forward:

    def forward(self, q, k, v) -> torch.Tensor:
        if (
            hasattr(F, "scaled_dot_product_attention")
            and not torch.is_autocast_enabled()
        ):
            q, k, v = [x.contiguous() for x in [q, k, v]]
            return F.scaled_dot_product_attention(q, k, v)
        else:
            s = self.s
            attn = F.softmax(torch.einsum("bnid,bnjd->bnij", q, k) * s, -1)
            return torch.einsum("bnij,bnjd->bnid", attn, v)
  2. Modify the opset_version in export.py from 16 to 12. Note that there are multiple occurrences of torch.onnx.export().
  3. Run the export:
python export.py --extractor_type disk --dynamic --end2end
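
As a reference for step 1, here is a sketch of the resulting forward once the if branch is removed, keeping only the einsum branch shown above:

    def forward(self, q, k, v) -> torch.Tensor:
        # Always take the einsum path; aten::scaled_dot_product_attention
        # would require opset >= 14.
        s = self.s
        attn = F.softmax(torch.einsum("bnid,bnjd->bnij", q, k) * s, -1)
        return torch.einsum("bnij,bnjd->bnid", attn, v)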
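To verify that the exported graph really targets opset 12, you can inspect it with the onnx package (the path below is a placeholder for whatever export.py writes out):

    import onnx

    # Placeholder path; use the file that export.py actually produced.
    model = onnx.load("weights/disk_lightglue_end2end.onnx")
    print(model.opset_import)        # default-domain entry should report version 12
    onnx.checker.check_model(model)  # sanity-check the graph structure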