locaal-ai/obs-backgroundremoval

PyTorch+ROCm for AMD Linux maybe?

reaperx7 opened this issue · 13 comments

Is there any way we can get PyTorch with ROCm support added for AMD GPUs, as an experimental alternative to TensorRT, to help reduce CPU usage on AMD GPUs?

Many thanks.

ROCm in general is supported by ONNX Runtime: https://onnxruntime.ai/docs/execution-providers/ROCm-ExecutionProvider.html
I just don't have a solid way of testing this.
Also, we would need to build onnxruntime with ROCm support ourselves: https://onnxruntime.ai/docs/build/eps.html#amd-rocm
since the Linux release of ORT doesn't include the ROCm EP, only CUDA and TRT.
So I'm not sure... we could certainly use some help with this.
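For reference, the build docs linked above suggest roughly the following steps to produce a ROCm-enabled onnxruntime. This is a sketch based on those docs, not a tested recipe; the ROCm install path is an assumption and may differ on your system.

```shell
# Sketch following the ONNX Runtime "Build with different EPs" docs (AMD ROCm
# section). Assumes ROCm is installed under /opt/rocm; adjust --rocm_home.
git clone --recursive https://github.com/microsoft/onnxruntime
cd onnxruntime
./build.sh --config Release --use_rocm --rocm_home /opt/rocm --build_shared_lib
```

The resulting shared library would then be the "system ONNX Runtime" the plugin's CMake build can be pointed at.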

@royshil We can provide build instructions for ROCm rather than pre-built binaries.

@reaperx7 Would you be so kind as to develop a ROCm-enabled version of our plugin? We would appreciate your contribution!

I would if I could, but I can't read code worth a penny.

@reaperx7 Do you have an environment to test our ROCm-enabled plugin if we provide you with build instructions?

Just dropping in to confirm I was able to build the plugin using the C/C++ API of the ROCm Execution Provider. All I needed to do was modify ort-session-utils.cpp to add the ROCm EP to the ONNX Runtime session instead of the TensorRT EP, and then set the appropriate flag in CMake to use the system's own ONNX Runtime, presumably built for ROCm.
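A minimal sketch of what that change might look like, assuming only the standard ONNX Runtime C++ API (the function name here is hypothetical; the actual edit lives inside ort-session-utils.cpp):

```cpp
// Hypothetical sketch: register the ROCm execution provider on the session
// options instead of TensorRT/CUDA. Requires an onnxruntime built with
// --use_rocm; the header and API below are from the ONNX Runtime C++ API.
#include <onnxruntime_cxx_api.h>

Ort::SessionOptions createRocmSessionOptions()
{
	Ort::SessionOptions sessionOptions;

	// Default ROCm provider options; device_id 0 selects the first AMD GPU.
	OrtROCMProviderOptions rocmOptions{};
	rocmOptions.device_id = 0;

	// Registers the ROCm EP; ORT falls back to CPU for unsupported ops.
	sessionOptions.AppendExecutionProvider_ROCM(rocmOptions);
	return sessionOptions;
}
```

The session is then created from these options as before; no other changes to the inference path should be needed.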

@reaperx7 @payom I have implemented the initial ROCm support. Can you test this?
#545

@reaperx7 You must build the binary yourself. We will not provide pre-built binaries. Thank you!

Tried enabling the plugin; it threw an error and OBS crashed. Log file attached.
error.log

Source built with flags:

cmake -B build --preset linux-x86_64 -DENABLE_ROCM=ON

This issue is stale because it has been open for 60 days with no activity.