fabio-sim/LightGlue-ONNX

Running inference with exported models in C++ is very unstable/non-deterministic


Has anyone else found that running these models in C++ is very unstable and non-deterministic?

I've found that inference via infer.py is very solid and produces the same result on every run. However, running that same model in C++ via ONNX Runtime is incredibly unstable and produces differing results: most of the time nothing is returned at all, and some of the time I get nearly the same results as the Python script.

This could well be an issue in my ONNX Runtime C++ wrapper, but I wanted to see if others have hit this and whether there are any known fixes. My C++ side is essentially the standard ONNX Runtime flow, sketched below.
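A minimal sketch of that flow for reference; the model filename, input/output names, and tensor shape here are placeholder assumptions rather than the actual LightGlue-ONNX export signatures, so query the session for the real ones. One classic cause of "random" C++ output is the input buffer dying before `Run()` finishes, since `CreateTensor` does not copy the data:

```cpp
#include <onnxruntime_cxx_api.h>
#include <array>
#include <iostream>
#include <vector>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "lightglue");
    Ort::SessionOptions opts;
    // Placeholder path; on Windows the constructor takes a wide string.
    Ort::Session session(env, "superpoint_lightglue_end2end.onnx", opts);

    // Placeholder input: shape and dtype are assumptions. Query the
    // session (GetInputNameAllocated / GetInputTypeInfo) for the real ones.
    std::vector<float> image(1 * 1 * 512 * 512, 0.0f);
    std::array<int64_t, 4> shape{1, 1, 512, 512};

    Ort::MemoryInfo mem =
        Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
    // CreateTensor does NOT copy: `image` must outlive the Run() call.
    // A dangling backing buffer is a classic source of non-deterministic
    // garbage output in C++ wrappers.
    Ort::Value input = Ort::Value::CreateTensor<float>(
        mem, image.data(), image.size(), shape.data(), shape.size());

    const char* input_names[] = {"image"};     // assumed name
    const char* output_names[] = {"matches"};  // assumed name
    auto outputs = session.Run(Ort::RunOptions{nullptr},
                               input_names, &input, 1,
                               output_names, 1);
    std::cout << "got " << outputs.size() << " output tensor(s)\n";
    return 0;
}
```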

Can you share your output results and the relevant part of your program? When I run C++ inference, the results are relatively stable.

I think I've found the problem: my ONNX Runtime version in Python is 1.16.0, but the C++ build links against 1.15.1. I'll align the versions and post an update.
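In case it helps anyone checking for the same mismatch, here is a quick sketch using the ONNX Runtime C API to print the version the C++ binary actually loads at runtime (which can differ from the headers it was compiled against); the Python side is just `onnxruntime.__version__`:

```cpp
#include <onnxruntime_c_api.h>
#include <cstdio>

int main() {
    // Reports the ONNX Runtime library loaded at runtime, which can
    // differ from the version of the headers used at compile time.
    std::printf("ONNX Runtime version: %s\n",
                OrtGetApiBase()->GetVersionString());
    return 0;
}
```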