fabio-sim/LightGlue-ONNX

Result different from the original repository?

1191658517 opened this issue · 3 comments

Have you ever noticed that the output of the optimized ONNX model can be slightly different from that of the original repository (https://github.com/cvg/LightGlue)? As shown below, the gray result is from the ONNX model. What might cause such a difference? Are there any parameters that need to be adjusted to align the result with the original repository? I'd be grateful if you could answer.
[Screenshots comparing the matching results of the original LightGlue model and the ONNX model (attachments 20240126-162828, 20240126-162816)]

Hi @1191658517, thank you for your interest in LightGlue-ONNX.

I can think of a couple of reasons why the results may differ:

  1. Different Operator Implementations: ONNXRuntime may implement some operators differently compared to PyTorch, resulting in slight differences which can accumulate over the forward pass.

  2. Adaptive vs. Non-adaptive: The original PyTorch model implements an adaptive forward pass, namely point pruning and early stopping (dynamic depth). To explain: LightGlue consists of 9 transformer layers, and the adaptive version can stop early before going through all 9 layers, for example at only 5 layers, depending on the difficulty of the inputs. On the other hand, since ONNX does not easily support exporting dynamic control flow, the ONNX model always goes through all 9 transformer layers in the forward pass.
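
If you want the two forward passes to be more directly comparable, you can disable these adaptive mechanisms on the PyTorch side. Here is a minimal sketch, assuming the `depth_confidence`/`width_confidence` arguments described in the cvg/LightGlue README; the image paths are placeholders:

```python
import torch
from lightglue import LightGlue, SuperPoint
from lightglue.utils import load_image, rbd

extractor = SuperPoint(max_num_keypoints=1024).eval()
matcher = LightGlue(
    features="superpoint",
    depth_confidence=-1,  # disable early stopping (always run all 9 layers)
    width_confidence=-1,  # disable point pruning
).eval()

# Placeholder image paths
image0 = load_image("assets/image0.jpg")
image1 = load_image("assets/image1.jpg")

with torch.no_grad():
    feats0 = extractor.extract(image0)
    feats1 = extractor.extract(image1)
    matches01 = matcher({"image0": feats0, "image1": feats1})

# Remove the batch dimension and read out the match indices
feats0, feats1, matches01 = [rbd(x) for x in (feats0, feats1, matches01)]
matches = matches01["matches"]  # (K, 2) indices into the two keypoint sets
```

With adaptivity disabled, any remaining gap between the PyTorch and ONNX outputs should come down to reason 1, i.e. small operator-level numerical differences.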

Thanks for your reply! So the only way to keep the results unchanged is to use the native PyTorch version rather than ONNX Runtime?

Yes, if you have access to Python in your environment, then using the native PyTorch version will give results consistent with the original repository.
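
If you do have to stay on ONNX Runtime, one practical option is to quantify how far apart the two backends actually are. A rough sketch, assuming you have already saved the match index pairs from each backend as NumPy arrays (the file names here are placeholders):

```python
import numpy as np

# Placeholder files: (K, 2) arrays of match index pairs from each backend.
matches_torch = np.load("matches_pytorch.npy")
matches_onnx = np.load("matches_onnx.npy")

# Compare the two sets of matches to see how much they overlap.
set_torch = {tuple(m) for m in matches_torch.tolist()}
set_onnx = {tuple(m) for m in matches_onnx.tolist()}
common = set_torch & set_onnx

print(f"PyTorch: {len(set_torch)} matches, ONNX: {len(set_onnx)} matches")
print(f"shared: {len(common)} ({100.0 * len(common) / max(len(set_torch), 1):.1f}% of PyTorch matches)")
```

This gives a concrete measure of the discrepancy instead of comparing visualizations by eye.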