Running custom models on UP Xtreme + MYRIAD VPU
mszuyx opened this issue · 0 comments
Hi,
First of all, great project! I'm a big fan.
I am currently working on deploying a custom model on a single-board computer, the UP Xtreme (the i7 version): https://up-board.org/up-xtreme/. The model can be converted from ONNX to Intel IR and executed via the OpenVINO runtime on our desktop.
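For reference, our desktop pipeline looks roughly like the sketch below (file paths are placeholders, and we are on the `IECore` Python API; the device name is whatever plugin is targeted):

```python
import numpy as np

def run_ir(model_xml="model.xml", model_bin="model.bin", device="CPU"):
    """Load an Intel IR pair and run one inference on dummy input.

    Placeholder paths; device could be "CPU" on the desktop or,
    presumably, "MYRIAD" on an NCS2-class accelerator.
    """
    # Import deferred so the rest of the script works without OpenVINO installed.
    from openvino.inference_engine import IECore

    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    exec_net = ie.load_network(network=net, device_name=device)

    # Build a dummy input matching the network's declared input shape.
    input_name = next(iter(net.input_info))
    shape = net.input_info[input_name].input_data.shape
    dummy = np.zeros(shape, dtype=np.float32)

    return exec_net.infer({input_name: dummy})
```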
Our questions are:
- Since data acquisition is done with an Intel RealSense D455 RGB-D camera, we figure a purely Intel setup would make things easier: D455 + UP Xtreme + NCS2 / UP Vision Plus X (https://up-shop.org/up-vision-plus-x.html). However, we are hesitant to purchase an inference accelerator because we don't know how to confirm that our model is compatible with NCS2 / MYRIAD devices. Is there a test we can run to check that? Or can these devices run any model, as long as it can be converted to Intel IR?
- We have some CPU-intensive programs that need to run alongside model inference. Would adding an inference accelerator offload the inference burden from the CPU?
- We did our development and benchmarking on a desktop, but we would like to deploy the model on the single-board computer and run it alongside other ROS packages. What is the recommended workflow? Should we install the OpenVINO library the same way we did on the desktop, and then ros-openvino? Or can ros-openvino handle IR files directly, so that a full OpenVINO installation is not necessary? (We want to keep the environment on the single-board computer as light as possible.)
- Is there an example we can follow to integrate our custom model into ros-openvino? So far it is still not clear to me how to load a custom IR into the ros-openvino framework.
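On the first bullet: my current guess at a compatibility test is simply trying to load the network on the MYRIAD plugin and seeing whether it throws, something like the sketch below (device name "MYRIAD" assumed from the OpenVINO docs; paths are placeholders, and I have not verified this on real hardware). Is this a valid check, or is there a more official way?

```python
def check_myriad_support(model_xml="model.xml", model_bin="model.bin"):
    """Return (ok, detail) indicating whether the IR loads on a MYRIAD device.

    A sketch only: assumes the IECore API and that unsupported layers
    surface as a RuntimeError at load time.
    """
    # Import deferred so the function can be defined without OpenVINO installed.
    from openvino.inference_engine import IECore

    ie = IECore()
    if "MYRIAD" not in ie.available_devices:
        return False, "no MYRIAD device found"

    net = ie.read_network(model=model_xml, weights=model_bin)
    try:
        ie.load_network(network=net, device_name="MYRIAD")
        return True, "model loaded on MYRIAD"
    except RuntimeError as err:  # e.g. layers unsupported by the VPU plugin
        return False, str(err)
```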
Thanks! :)