pytorch/ort

Question about supported devices


Hi, I find that if I want to install torch-ort, I must install the CUDA dependency first. But the BERT example also supports running on CPU. So I wonder whether the CUDA dependency is really required for installation?

Most of ORT's optimization techniques target large models. When running a big model, we assume a GPU is needed, so users need to install PyTorch with GPU support. As an example, ORT for training always expects a PyTorch GPU allocator to be available.
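A minimal sketch of the GPU-based setup described above, assuming torch-ort has been installed against a CUDA-enabled PyTorch build (e.g. `pip install torch-ort` followed by `python -m torch_ort.configure`); the tiny model and tensor shapes are illustrative only:

```python
import torch
from torch_ort import ORTModule

# ORT for training assumes a GPU allocator is available, so the model
# and data live on the CUDA device.
device = torch.device("cuda")

model = torch.nn.Linear(784, 10).to(device)
model = ORTModule(model)  # wrap the module so training runs through ONNX Runtime

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
inputs = torch.randn(32, 784, device=device)
targets = torch.randint(0, 10, (32,), device=device)

loss = torch.nn.functional.cross_entropy(model(inputs), targets)
loss.backward()
optimizer.step()
```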

@mengniwang95 Although it is possible to run using CPU only, ONNX Runtime does not prioritize it on its roadmap.
Many models will not be trainable on CPU only, either due to missing operators or because they are simply too big for the CPU.

Maybe you can start with CPU only to get going, and migrate to CUDA or ROCm when you need to train more complex models.
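A hedged sketch of that migration path: pick the device at runtime so the same script runs on CPU today and on CUDA (ROCm builds of PyTorch also report the device as "cuda") once a GPU is available. Whether CPU-only execution actually works depends on the operators your model needs, as noted above.

```python
import torch
from torch_ort import ORTModule

# Fall back to CPU when no GPU is present; switch automatically once
# a CUDA/ROCm-enabled PyTorch build and device are available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(784, 10).to(device)
model = ORTModule(model)

x = torch.randn(8, 784, device=device)
print(model(x).shape)  # torch.Size([8, 10])
```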