fabio-sim/LightGlue-ONNX
ONNX-compatible LightGlue: Local Feature Matching at Light Speed. Supports TensorRT, OpenVINO
Python · Apache-2.0
Issues
Adding support for SIFT
#71 opened by demplo - 3
Error when converting TensorRT engine model
#65 opened by long-senpai - 1
".trt.onnx" export example
#70 opened by WYKXD - 3
False Positive Keypoints on uniform Images
#76 opened by DavideCatto - 16
Convert SuperPoint from ONNX to engine
#77 opened by midskymid - 4
How to convert ONNX into an RKNN model?
#72 opened by ouxiand - 8
Support for open version of Superpoint
#39 opened by adricostas - 3
UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 17
#75 opened by BayRanger - 4
Internal Error (/lightglue/ArgMax)
#66 opened by chenscottus - 4
Converting a trained model to ONNX
#63 opened by ikaftan - 2
Running inference using exported models in C++ very unstable/non-deterministic
#60 opened by will-kudan - 1
jetson
#53 opened by sushi31415926 - 2
ALIKED Support
#69 opened by mug1wara26 - 0
Result different from the original repository?
#62 opened by 1191658517 - 2
NOT_IMPLEMENTED : Non-zero status code returned while running MultiHeadAttention node
#51 opened by dmoti - 0
Running inference throws a CUDA exception
#61 opened by laxnpander - 4
The output shape of lightglue's onnx model is dynamic. Does tensorrt support dynamic output?
#59 opened by weihaoysgs - 5
Building SuperPoint ONNX file into a TensorRT engine fails with Error (Could not find any implementation for node {ForeignNode[/Flatten.../Transpose_3]})
#58 opened by weihaoysgs - 7
Does LightGlue run in TensorRT mode, and how to build the engine using the C++ interface?
#56 opened by weihaoysgs - 2
aliked
#55 opened by sushi31415926 - 1
support dynamic batch size
#54 opened by WalkerWen - 3
Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 17 is not supported.
#45 opened by Albert337 - 3
Issues with the deployment on web
#42 opened by adricostas - 5
Integration to kornia?
#40 opened by ducha-aiki - 4
use trtexec
#36 opened by kajo-kurisu - 1
How do I export without specifying an image size?
#49 opened by SpenceraM - 2
Can't run on CPU
#44 opened by XL634663985 - 1
too many values to unpack (expected 2)
#38 opened by czy-1234 - 1
How to export FP32 model?
#35 opened by JingruiYu - 8
onnxruntime TRT error
#32 opened by 1320414730 - 3
Nvidia TX2 is hard to use
#29 opened by demonove - 1
Model with depth_confidence && width_confidence set can't run with infer.py
#27 opened by demonove - 1
SuperPoint inference takes around 8s
#26 opened by goktugyildirim4d - 6
onnxruntime error
#21 opened by valenbase - 1
SuperPoint Mask Support
#25 opened by goktugyildirim4d - 2
Input Tensor /Gather_3_output_0 is unused
#22 opened by goktugyildirim4d - 3
License issue of superpoint + LightGlue onnx
#24 opened by QuanNguyen94 - 4
ONNX opset version 12 is not supported
#19 opened by valenbase