NVIDIA-AI-IOT/yolov5_gpu_optimization

Can I run this YOLOv5 model in the Python samples?

OctaM opened this issue · 3 comments

OctaM commented

I am using the Python sample apps and I want to use this model. Do I have to follow the exact same steps?

  1. Convert the model to ONNX (see the sketch after this list)
  2. Compile the decode plugin and DeepStream parser
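
For context on step 1, the ONNX conversion is a one-line export; the flags below follow the standard YOLOv5 export.py interface, and the exact repo-specific invocation is in the README:

```bash
# Export the PyTorch checkpoint to ONNX. The flags shown follow the
# standard YOLOv5 export.py interface; the exact command (and any
# repo-specific patches) is in the yolov5_gpu_optimization README.
python export.py --weights yolov5s.pt --include onnx
```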

Tyler-D commented

If you only want to run the Python sample, you don't need the decode plugin.

Follow the steps of the TensorRT sample:
https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization#tensorrt-sample

Just comment out this line to skip loading the plugin:
https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization/blob/main/tensorrt-sample/yolov5_trt_inference.py#L16
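
That line loads the compiled decode plugin so TensorRT can resolve the custom op; commenting it out looks roughly like the sketch below (the call and path are illustrative, the actual code is on L16 of the linked file):

```python
import ctypes

# The sample loads the compiled decode plugin near the top of the file
# so TensorRT can resolve the custom op when deserializing the engine.
# For the plain Python/TensorRT sample this is unnecessary, so the load
# can be commented out:
# ctypes.CDLL("./yolov5_decode.so")
```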

OctaM commented

Thanks for the answer @Tyler-D.

What about these samples? https://github.com/NVIDIA-AI-IOT/deepstream_python_apps

Can I run the model inside those apps? And if yes, should I also remove the decode plugin?
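
For context, the deepstream_python_apps samples configure the primary inference engine from a text file, so swapping in this model usually comes down to pointing that file path at the repo's config. A minimal sketch following the usual pattern in those apps (element and file names are illustrative):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# The deepstream_python_apps samples create the primary inference
# engine (pgie) as an "nvinfer" element and hand it a config file;
# pointing config-file-path at this repo's YOLOv5 config is the usual
# way to swap the model in (file name here is the one linked below).
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "config_infer_primary_yoloV5.txt")
```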

I've been trying to run it inside one of those apps with this config: https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization/blob/main/deepstream-sample/config/config_infer_primary_yoloV5.txt
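
The decode plugin connects to that config through the custom-library setting; a hypothetical excerpt with typical nvinfer keys is below (the linked file is authoritative for the real values):

```ini
[property]
# model exported in step 1
onnx-file=yolov5s.onnx
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=80
# the compiled decode library mentioned in the next paragraph
custom-lib-path=yolov5_decode.so
```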

I also compiled the plugin and generated yolov5_decode.so, but when I run the sample it doesn't show any error, just Segmentation fault (core dumped).

OctaM commented

Update: I was able to get this working by following https://github.com/marcoslucianops/DeepStream-Yolo