Can the model be converted into ONNX to accelerate inference?
Closed this issue · 4 comments
Hello, @shubham-goel @geopavlakos
Currently, inference is too slow. Is there any way to achieve real-time inference at 30 fps?
Which aspect are you mostly interested in? The HMR2.0 network? The general single-image demo? Or the video tracking code?
Hi @geopavlakos
I am very interested in video tracking with HMR2.0, and I want to achieve real-time camera tracking. Can ONNX Runtime (or some other method) be used to accelerate it enough to run in real time? Thank you.
If you want to use only the HMR2.0 network, that should require ~32ms for a single forward pass on an RTX 3090 (potentially faster on more recent hardware). If the HMR2.0 network is part of a more elaborate pipeline (e.g., supporting video/camera tracking), then there are more components to be considered. I'm not familiar with ONNX, so I cannot tell for sure if that would help.
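For reference, the quoted ~32 ms forward pass puts a ceiling of roughly 31 fps on the network alone, so any additional pipeline components (detection, tracking, rendering) will push throughput below real time. A back-of-the-envelope sketch (the 15 ms overhead figure below is a made-up illustration, not a measurement):

```python
# Rough arithmetic sketch (not from the repo): convert per-frame latency
# into an upper bound on throughput, assuming the model runs serially
# with no batching or pipelining.

def max_fps(latency_ms: float) -> float:
    """Upper bound on frames per second for a given per-frame latency."""
    return 1000.0 / latency_ms

# ~32 ms forward pass quoted for an RTX 3090
network_only = max_fps(32)        # 31.25 fps -- just above 30 fps

# Hypothetical extra per-frame cost for detection/tracking/rendering
with_pipeline = max_fps(32 + 15)  # ~21.3 fps -- below real time

print(f"network only: {network_only:.2f} fps")
print(f"with pipeline overhead: {with_pipeline:.2f} fps")
```

In other words, even a lossless ONNX conversion would need to shave latency elsewhere in the pipeline (or overlap stages) to sustain 30 fps end to end.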
Hi @geopavlakos, how long does the general single-image demo take? Can that pipeline achieve 30 fps? Thank you.