microsoft/onnxruntime-inference-examples
Examples for using ONNX Runtime for machine learning inferencing.
C++ · MIT license
Issues
Android app token-to-string issue
#484 opened by j0h0k0i0m - 1
Used yolo_e2e to convert YOLOv8n.pt to ONNX, but the bounding-box sizes are incorrect
#499 opened by FengYanNMG - 0
QNN EP Sample - Fresh build with latest official QNN package produces errors on ARM64 Snapdragon QC PC
#497 opened by ivberg - 3
Android object identification: Non-zero status code returned while running Sub node
#492 opened by hubertwang - 1
How to deploy my own ONNX YOLOv8 model?
#469 opened by Christian-lyc - 3
YOLOv8 model taken from the onnxruntime-extensions source, with pre/post-processing support, does not detect objects
#416 opened by Pesekot1 - 6
OrtxPackage.getLibraryPath() crash
#463 opened by OrangeHao - 2
Failure to build Phi-3 on Android
#442 opened by Vinaysukhesh98 - 2
OnnxRuntime does not work when running on real iPhone - The type initializer for 'Microsoft.ML.OnnxRuntime.NativeMethods' threw an exception.
#455 opened by codingzerotohero - 1
Android - App crash when using Phi-3
#444 opened by XantaKross - 0
Issue: Error loading 8-bit quantized ONNX model at inference - Protobuf parsing failed
#453 opened by chakka12345677 - 0
Converted the original phi-3-mini-4k and loaded it into the provided app, but got an exception
#452 opened by scchiustone - 0
Issue with responses generated by the quantized model
#450 opened by ragesh2000 - 3
How to speed up Phi-3 inference?
#428 opened by CHNtentes - 0
Cannot convert Llama-3-awq to ONNX
#448 opened by Toan-it-mta - 2
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running ReduceMax node. Name:'488_ReduceMax' Status Message:
#447 opened by fanshao123456 - 1
[New Project] Inference of Stable Diffusion on all platforms with ONNX Runtime
#446 opened by Windsander - 0
Phi-3 Vision with ONNX Runtime on Android
#445 opened by henrywang0314 - 0
Failed running on iPhone (14 Pro Max)
#443 opened by privateLLM2024 - 1
QNN C++ Example not working on Surface Pro 9
#404 opened by hschumanncvx - 0
The ONNX model can't be loaded in the React-based front end
#427 opened by tanggang1997 - 1
Output image is distorted when trying to run inference with ESRGAN
#423 opened by md-rifatkhan - 1
Exporting the vit_b model with sam-exporter?
#422 opened by flores-o - 0
ONNX quantization - AssertionError
#377 opened by katia-katkat - 3
Fail to quantize the Llama-2-7b-chat-hf model
#375 opened by igaspard - 4
Error "QNN execution provider is not supported in this build" when testing the ONNX QNN EP
#397 opened by shawnyxf - 2
InferenceSession on GPU
#411 opened by shimaamorsy - 1
About the inference efficiency of ONNX Runtime with the QNN EP
#402 opened by shawnyxf - 0
c_cxx/customop code doesn't seem to work on Linux
#408 opened by viking714 - 2
Incorrect Responses for Llama-2-7b-hf with RTN/GPTQ INT4 Asymmetric Quantization
#381 opened by VishalX - 1
Android app example with whisper_base_cpu_int8.onnx gives an out-of-memory error
#382 opened by suyash-narain - 6
Running QNN C++ example with other models
#376 opened by hhschumann - 0