RidgeRun/gst-inference

What's the difference between a Python and a C++ plugin for inference?

PythonImageDeveloper opened this issue · 2 comments

Hi all,
I want to know: if we write an inference plugin for deep learning models in Python versus C++, what is the difference in speed?
As we all know, C/C++ is faster than Python, and that matters when we implement algorithms from scratch. But here we want to run inference for a model whose backend is C/C++ with a Python interface (like TensorFlow): a Python GStreamer plugin whose main work happens in the C/C++ backend. In that case, how different are the speeds of a C/C++ plugin and a Python plugin?

Hi @PythonImageDeveloper. I do not have concrete numbers to give you, but I agree with your statement: if the algorithms are written in C/C++ underneath, a Python interface shouldn't be a significant bottleneck. This holds as long as you can transparently share memory between C/C++ and Python. As a reference, we have some elements written in Python that perform inference underneath (PyTorch in our case) and they work fine.
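As a rough illustration of the point above (this is not GstInference code, just a hypothetical micro-benchmark), compare per-element work done by the Python interpreter against a single call into a C-implemented routine. Python's built-in `sum()` runs its loop in C, so the Python-side cost is one call rather than one interpreter step per element; the same logic applies to a thin Python element delegating a whole frame to a C/C++ inference backend:

```python
# Illustrative micro-benchmark (assumption: stdlib only, no GStreamer involved).
# The claim being sketched: when per-element work runs in C underneath,
# the Python layer adds a small constant call overhead, not a per-element cost.
import timeit

data = list(range(100_000))

def python_loop_sum(values):
    """Per-element work executed by the Python interpreter."""
    total = 0
    for v in values:
        total += v
    return total

def c_backed_sum(values):
    """One Python call; the per-element loop happens in C inside sum()."""
    return sum(values)

loop_time = timeit.timeit(lambda: python_loop_sum(data), number=50)
c_time = timeit.timeit(lambda: c_backed_sum(data), number=50)

print(f"pure-Python loop: {loop_time:.4f}s")
print(f"C-backed sum():   {c_time:.4f}s")
```

The analogy to a plugin: the more work each Python call hands off to the backend (a whole buffer or frame, not a pixel), the less the choice of plugin language matters.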

In the specific case of GstInference, we prefer to stay at a lower level for portability reasons. If the core is in C/C++, it is easy for a user to write an app on top of it in Python, Rust, or C#; the reverse is not true.

Hi @michaelgruner ,
Thanks for your suggestion.