
ONNXRunTime


ONNXRunTime provides unofficial Julia bindings for onnxruntime. It exposes both a low-level interface that mirrors the official C API and a high-level interface.

Contributions are welcome.

Usage

The high-level API works as follows:

julia> import ONNXRunTime as OX

julia> path = OX.testdatapath("increment2x3.onnx"); # path to a toy model

julia> model = OX.load_inference(path);

julia> input = Dict("input" => randn(Float32,2,3))
Dict{String, Matrix{Float32}} with 1 entry:
  "input" => [1.68127 1.18192 -0.474021; -1.13518 1.02199 2.75168]

julia> model(input)
Dict{String, Matrix{Float32}} with 1 entry:
  "output" => [2.68127 2.18192 0.525979; -0.135185 2.02199 3.75168]

For GPU usage, simply do:

pkg> add CUDA

julia> import CUDA

julia> OX.load_inference(path, execution_provider=:cuda)

The low-level API mirrors the official C API. Written against it, the example above looks like this:

using ONNXRunTime.CAPI
using ONNXRunTime: testdatapath

# Set up the onnxruntime API, environment, and session options.
api = GetApi()
env = CreateEnv(api, name="myenv")
so = CreateSessionOptions(api)

# Load the toy model into an inference session.
path = testdatapath("increment2x3.onnx")
session = CreateSession(api, env, path, so)

# Wrap the input array in an OrtValue tensor backed by CPU memory.
mem = CreateCpuMemoryInfo(api)
input_array = randn(Float32, 2, 3)
input_tensor = CreateTensorWithDataAsOrtValue(api, mem, vec(input_array), size(input_array))

# Run the session and unwrap the single output tensor.
run_options = CreateRunOptions(api)
input_names = ["input"]
output_names = ["output"]
inputs = [input_tensor]
outputs = Run(api, session, run_options, input_names, inputs, output_names)
output_tensor = only(outputs)
output_array = GetTensorMutableData(api, output_tensor)
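Since the toy model simply adds one to each element (as the high-level example above shows), the result can be sanity-checked directly:

output_array ≈ input_array .+ 1  # should hold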

Alternatives