fabio-sim/LightGlue-ONNX

A model exported with depth_confidence and width_confidence set can't be run by infer.py.

Closed this issue · 1 comments

In export.py, I build the .onnx model with: lightglue = LightGlue(extractor_type, flash=flash, depth_confidence=0.95, width_confidence=0.99).eval()
When I use this .onnx in infer.py, it fails to run.

E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running Squeeze node. Name:'/Squeeze_3' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/squeeze.h:52 static onnxruntime::TensorShapeVector onnxruntime::SqueezeBase::ComputeOutputShape(const onnxruntime::TensorShape&, const TensorShapeVector&) input_shape[i] == 1 was false. Dimension of input 1 must be 1 instead of 2. shape={2045,2}

Is it possible to set depth_confidence and width_confidence in export.py?
Or is something wrong with the onnxruntime I'm using?

Hi @demonove , thank you for your interest in LightGlue-ONNX.

The depth/width confidence parameters induce dynamic, data-dependent control flow in LightGlue: a for-loop whose number of iterations depends on the input itself. The current tracing-based torch.onnx.export() cannot capture this adaptive behaviour, where 'easier' image pairs only pass through fewer layers (e.g., L=3-4), while 'harder' ones need more (e.g., L=7-9).
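To illustrate the limitation, here is a minimal, self-contained sketch (a toy module, not LightGlue itself) showing how tracing freezes a data-dependent loop count into the exported graph:

```python
import torch

class EarlyExit(torch.nn.Module):
    """Toy module with a data-dependent loop, mimicking adaptive depth."""
    def forward(self, x):
        i = 0
        # Loop until the mean exceeds a threshold: the iteration count
        # depends on the input values, just like early-exit pruning.
        while x.mean() < 1.0 and i < 9:
            x = x + 0.25
            i += 1
        return x

m = EarlyExit()
easy = torch.ones(3) * 2.0   # needs 0 iterations
hard = torch.zeros(3)        # needs 4 iterations

# Tracing records only the ops executed for the example input,
# so the 4 iterations taken for `hard` are baked into the graph.
traced = torch.jit.trace(m, hard)

print(m(easy))       # eager: unchanged, tensor([2., 2., 2.])
print(traced(easy))  # traced: the 4 baked-in additions run anyway
```

The traced module applies the recorded number of additions regardless of the input, which is exactly why torch.onnx.export() (which traces by default) cannot represent LightGlue's adaptive-depth mode.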

I hope the upcoming torch.export() path will support this use case, though.