ONNX Runtime is a performance-focused, complete scoring engine for Open Neural Network Exchange (ONNX) models, with an open, extensible architecture designed to keep pace with the latest developments in AI and deep learning. ONNX Runtime stays up to date with the ONNX standard and supports all operators from the ONNX v1.2+ spec, with both forward and backward compatibility. Please refer to this page for ONNX opset compatibility details.
ONNX is an interoperable format for machine learning models supported by various ML and DNN frameworks and tools. The universal format makes it easier to interoperate between frameworks and maximize the reach of hardware optimization investments.
- Setup
- Usage
- More Info
ONNX Runtime provides comprehensive support of the ONNX spec and can be used to run all models based on ONNX v1.2.1 and higher. See version compatibility details here.
Traditional ML support
In addition to DNN models, ONNX Runtime fully supports the ONNX-ML profile of the ONNX spec for traditional ML scenarios.
For the full set of operators and types supported, please see the operator documentation.
Note: Some operators not supported in the current ONNX version may be available as a Contrib Operator
ONNX Runtime supports both CPU and GPU. Using various graph optimizations and accelerators, ONNX Runtime can provide lower latency compared to other runtimes for faster end-to-end customer experiences and minimized machine utilization costs.
Currently ONNX Runtime supports the following accelerators:
- MLAS (Microsoft Linear Algebra Subprograms)
- NVIDIA CUDA
- Intel MKL-ML
- ACL (in preview, for the ARM Compute Library)
Not all variations are available in the official release builds, but they can be built from source by following these instructions.
We are continuously working to integrate new execution providers for further improvements in latency and efficiency. If you are interested in contributing a new execution provider, please see this page.
ONNX Runtime is currently available for Linux, Windows, and Mac with Python, C#, C++, and C APIs. Please see API documentation and package installation.
If you have specific scenarios that are not supported, please share your suggestions and scenario details via Github Issues.
Quick Start: The ONNX-Ecosystem Docker container image is available on Dockerhub and includes ONNX Runtime (CPU, Python), dependencies, tools to convert from various frameworks, and Jupyter notebooks to help get started.
Additional dockerfiles can be found here.
| | CPU (MLAS+Eigen) | CPU (MKL-ML) | GPU (CUDA) |
|---|---|---|---|
| Python | pypi: onnxruntime<br>Windows (x64)<br>Linux (x64)<br>Mac OS X (x64) | -- | pypi: onnxruntime-gpu<br>Windows (x64)<br>Linux (x64) |
| C# | Nuget: Microsoft.ML.OnnxRuntime<br>Windows (x64, x86)<br>Linux (x64, x86)<br>Mac OS X (x64) | Nuget: Microsoft.ML.OnnxRuntime.MKLML<br>Windows (x64)<br>Linux (x64)<br>Mac OS X (x64) | Nuget: Microsoft.ML.OnnxRuntime.Gpu<br>Windows (x64)<br>Linux (x64) |
| C/C++ wrapper | Nuget: Microsoft.ML.OnnxRuntime<br>.zip, .tgz<br>Windows (x64, x86)<br>Linux (x64, x86)<br>Mac OS X (x64) | Nuget: Microsoft.ML.OnnxRuntime.MKLML<br>Windows (x64)<br>Linux (x64)<br>Mac OS X (x64) | Nuget: Microsoft.ML.OnnxRuntime.Gpu<br>.zip, .tgz<br>Windows (x64)<br>Linux (x64) |
- ONNX Runtime binaries in the CPU packages use OpenMP and depend on the library being available at runtime on the system.
  - For Windows, OpenMP support comes as part of the VC runtime. It is also available as redist packages: vc_redist.x64.exe and vc_redist.x86.exe.
  - For Linux, the system must have `libgomp.so.1`, which can be installed with `apt-get install libgomp1`.
- GPU builds require the CUDA runtime libraries to be installed on the system:
  - Version: CUDA 10.0 and cuDNN 7.6
  - Older ONNX Runtime releases used CUDA 9.1 and cuDNN 7.1; please refer to prior release notes for details.
- Python binaries are compatible with Python 3.5-3.7. See Python Dev Notes. If using `pip` to download the Python binaries, run `pip install --upgrade pip` first.
- Certain operators make use of system locales. Installing the English language package and configuring the `en_US.UTF-8` locale is required.
  - For Ubuntu, install the language-pack-en package and run the following commands: `locale-gen en_US.UTF-8` and `update-locale LANG=en_US.UTF-8`.
  - Follow a similar procedure to configure other locales on other platforms.
If additional build flavors and/or dockerfiles are needed, please find instructions at Build ONNX Runtime. For production scenarios, it's strongly recommended to build only from an official release branch.
- The ONNX Model Zoo has popular ready-to-use pre-trained models.
- To export or convert a trained model from various frameworks to ONNX, see ONNX Tutorials. Versioning compatibility information can be found under Versioning.
- Other services that can be used to create ONNX models include:
ONNX Runtime can be deployed to the cloud for model inferencing using Azure Machine Learning Services. See detailed instructions and sample notebooks.
ONNX Runtime Server (beta) is a hosted application for serving ONNX models using ONNX Runtime, providing a REST API for prediction. Usage details can be found here, and image installation instructions are here.
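As a hedged sketch of what a client call might look like: the host, model name, and version below are placeholders, and the `/v1/models/<name>/versions/<ver>:predict` route follows the beta server's REST convention (verify it against the server documentation for your release).

```python
# Hedged sketch: composing a JSON prediction request for ONNX Runtime
# Server (beta). Host, model name, and version are placeholders.
import json
import urllib.request

def build_predict_url(host: str, model_name: str, version: int) -> str:
    """Build the REST prediction endpoint URL."""
    return f"http://{host}/v1/models/{model_name}/versions/{version}:predict"

def predict(host: str, model_name: str, version: int, inputs: dict) -> dict:
    """POST a JSON prediction request and return the parsed JSON response."""
    req = urllib.request.Request(
        build_predict_url(host, model_name, version),
        data=json.dumps({"inputs": inputs}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```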
The expanding selection of IoT devices with sensors and steady signal streams introduces new opportunities to move AI workloads to the edge.
This is particularly important when massive volumes of incoming data/signals would be inefficient or impractical to push to the cloud due to storage or latency constraints. Consider surveillance footage where 99% of the material is uneventful, or real-time person-detection scenarios where immediate action is required. In these cases, executing model inferencing directly on the target device is crucial.
To deploy AI workloads to these edge devices and take advantage of hardware acceleration capabilities on the target device, see these reference implementations.
ONNX Runtime packages are published to PyPi and Nuget (see Official Builds) and/or can be built from source for local application development. Find samples here using the C++ API.
On newer Windows 10 devices (1809+), ONNX Runtime is available by default as part of the OS and is accessible via the Windows Machine Learning APIs. Find tutorials here for building a Windows Desktop or UWP application using WinML.
ONNX Runtime is open and extensible, supporting a broad set of configurations and execution providers for model acceleration. For performance tuning guidance, please see this page.
To tune performance for ONNX models, the ONNX Go Live tool "OLive" provides an easy-to-use pipeline for converting models to ONNX and optimizing performance for inferencing with ONNX Runtime.
- Add a custom operator/kernel
- Add an execution provider
- Add a new graph transform
- Add a new rewrite rule
This project may collect usage data and send it to Microsoft to help improve our products and services. See the privacy statement for more details.
We welcome contributions! Please see the contribution guidelines.
For any feedback or to report a bug, please file a GitHub Issue.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.