chaiNNer-org/chaiNNer

DirectML Execution Provider for ONNX Runtime

Opened this issue · 1 comment

Motivation
After installing chaiNNer and ONNX Runtime, I don't see an option to run models on AMD/Intel GPUs, which I assume is because the DirectML Execution Provider isn't available.

Description
It would be nice to have the option of using AMD/Intel GPUs with ONNX Runtime in chaiNNer.
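
For reference, requesting the DirectML EP through the ONNX Runtime Python API looks roughly like this. This is only a minimal sketch: the model path is a placeholder, and it assumes the `onnxruntime-directml` wheel is installed rather than the plain `onnxruntime` one.

```python
import onnxruntime as ort

# Placeholder model path for illustration only.
model_path = "model.onnx"

# Prefer DirectML (AMD/Intel/NVIDIA GPUs on Windows), falling back to CPU
# if the DirectML provider isn't present in the installed ORT build.
session = ort.InferenceSession(
    model_path,
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],
)

# Shows which providers were actually applied to the session.
print(session.get_providers())
```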

Alternatives
Currently, the best alternative is NCNN, but its support isn't 1:1 with ONNX and some operations are unsupported. You can also run ONNX Runtime on the CPU, but that is much slower, depending on the model.
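
For the CPU-fallback case, the GPU option could be gated on whether the DirectML provider is actually present in the installed build. A hedged sketch of that check (the `providers` list here is just one way chaiNNer might choose to order them):

```python
import onnxruntime as ort

# Providers compiled into the installed ONNX Runtime package.
available = ort.get_available_providers()

if "DmlExecutionProvider" in available:
    # onnxruntime-directml is installed: prefer the GPU, keep CPU as fallback.
    providers = ["DmlExecutionProvider", "CPUExecutionProvider"]
else:
    # Plain onnxruntime wheel (or non-Windows): CPU only.
    providers = ["CPUExecutionProvider"]
```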

I'll look into how easy it would be to add this.