ardianumam/Tensorflow-TensorRT

Custom TensorFlow-Yolov3 to TF-TRT


Hi,
I am using the TensorFlow version of yolov3, it's not the same as the darknet, it used two yolov3 for feature extraction from visual and infrared images and then perform feature fusion and finally object detection.

My project runs at about 40 FPS on a GTX 1080 Ti, but on a Xavier NX the speed is only 2 FPS.
Now my goal is to convert this TensorFlow model to ONNX and a TensorRT engine to speed it up on the Xavier NX.
I have weights in both .ckpt and .pb format.
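For reference, the kind of TF-TRT conversion I have been experimenting with looks roughly like the sketch below. The file paths and output node names are placeholders for my fused model, and on newer TF 1.x versions the import would be `from tensorflow.python.compiler.tensorrt import trt_convert as trt` instead of the contrib module:

```python
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt  # TF 1.x contrib API

# Load the frozen .pb graph (path is a placeholder)
with tf.gfile.GFile("frozen_fusion_yolov3.pb", "rb") as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Ask TF-TRT to replace supported subgraphs with TRTEngineOp nodes
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=["detections/boxes", "detections/scores"],  # placeholders: real output node names
    max_batch_size=1,
    max_workspace_size_bytes=1 << 30,
    precision_mode="FP16",   # FP16 is usually where most of the Jetson speedup comes from
    minimum_segment_size=3)

# Save the optimized graph for later inference
with tf.gfile.GFile("trt_fusion_yolov3.pb", "wb") as f:
    f.write(trt_graph.SerializeToString())
```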
1- What steps should I follow? I am really confused; there are many conflicting articles about TF-TRT and TensorRT but no clear guidelines.
2-Do I need to use TF-TRT or TensorRT?
3- Can you give me a roadmap for this task? Will it help speed up detection on the Xavier NX?

I have spent about 15 days trying on my own but failed, so I finally decided to post my question here. I hope you can guide me.

Thanks.

@ardianumam After several tries, I have successfully optimized this model, but the model size and speed are the same. The number of nodes is only slightly reduced.
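For reference, here is a quick check of how much of the graph is actually offloaded to TensorRT (a sketch, assuming the converted GraphDef is loaded as `trt_graph` as in the snippet above):

```python
# Count how many subgraphs TF-TRT actually replaced with TensorRT engines
trt_engine_nodes = [n.name for n in trt_graph.node if n.op == "TRTEngineOp"]
print("TRTEngineOp nodes:", len(trt_engine_nodes))
print("Total nodes:", len(trt_graph.node))
```

If that count is very small, most of the model is still running as plain TensorFlow ops, which would explain why the speed barely changes.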

Can you please check the problem?
If I test the model on a desktop GPU, should it show the speed difference, or does the speedup only appear on edge devices, i.e. Jetson?
[Screenshot attached: "Screenshot from 2020-09-28 12-14-27"]