ltkong218/FastFlowNet

Export to TensorRT

anenbergb opened this issue · 2 comments

A few questions about exporting FastFlowNet to TensorRT for inference on the Jetson TX2.

  • Which version of TensorRT was used to test FastFlowNet on the Jetson TX2?
  • Which toolkit or open-source project did you use to export FastFlowNet from PyTorch to TensorRT? Did you use torch2trt?

I used TensorRT 6.0 to test FastFlowNet on the Jetson TX2.
I wrote CUDA kernels and TensorRT plugins for the Center Dense Dilated Correlation layer and the Warping layer, then used CMake to compile them into a dynamic link library (libplugin.so). Next, I implemented FastFlowNet in TensorRT without torch2trt: I loaded the Center Dense Dilated Correlation and Warping layers from the compiled library and added them to the network as custom plugin layers. Finally, I loaded the PyTorch pretrained parameters into the TensorRT model.
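For readers wondering what such a plugin looks like, here is a minimal sketch of a TensorRT 6-era `IPluginV2` plugin for the Warping layer. It is not the author's actual code: the class name `WarpPlugin`, the kernel launcher `warp_forward`, and the assumed inputs (image features plus a flow field, NCHW float32) are illustrative assumptions; only the `IPluginV2` interface itself is standard TensorRT API.

```cpp
// Minimal sketch of a TensorRT 6 custom plugin (IPluginV2) for a warping
// layer. WarpPlugin and warp_forward are hypothetical names; parameter
// serialization is omitted for brevity.
#include <NvInfer.h>
#include <cuda_runtime.h>
#include <string>

using namespace nvinfer1;

// Launcher for the CUDA kernel (compiled separately from a .cu file):
// bilinearly samples `image` at positions shifted by `flow`.
void warp_forward(const float* image, const float* flow, float* output,
                  int n, int c, int h, int w, cudaStream_t stream);

class WarpPlugin : public IPluginV2
{
public:
    WarpPlugin() = default;

    int getNbOutputs() const override { return 1; }

    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        return inputs[0];  // the warped output keeps the image input's shape
    }

    bool supportsFormat(DataType type, PluginFormat format) const override
    {
        return type == DataType::kFLOAT && format == PluginFormat::kNCHW;
    }

    void configureWithFormat(const Dims* inputDims, int nbInputs,
                             const Dims* outputDims, int nbOutputs,
                             DataType type, PluginFormat format,
                             int maxBatchSize) override
    {
        // Cache the (C, H, W) of the image input for the kernel launch.
        mC = inputDims[0].d[0]; mH = inputDims[0].d[1]; mW = inputDims[0].d[2];
    }

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        warp_forward(static_cast<const float*>(inputs[0]),   // image features
                     static_cast<const float*>(inputs[1]),   // flow field
                     static_cast<float*>(outputs[0]),
                     batchSize, mC, mH, mW, stream);
        return 0;
    }

    // No parameters to serialize in this sketch.
    size_t getSerializationSize() const override { return 0; }
    void serialize(void* buffer) const override {}

    const char* getPluginType() const override { return "Warp"; }
    const char* getPluginVersion() const override { return "1"; }
    void destroy() override { delete this; }
    IPluginV2* clone() const override { return new WarpPlugin(); }
    void setPluginNamespace(const char* ns) override { mNamespace = ns; }
    const char* getPluginNamespace() const override { return mNamespace.c_str(); }

private:
    int mC{0}, mH{0}, mW{0};
    std::string mNamespace;
};
```

The correlation plugin would follow the same skeleton with its own output-shape logic and kernel launcher.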

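Once the plugins are compiled into libplugin.so, they can be pulled into the network definition roughly as follows. This is a sketch under stated assumptions: the plugin type name ("Warp", version "1") and the premise that libplugin.so registers its creators on load (e.g. via `REGISTER_TENSORRT_PLUGIN`) are guesses, while `getPluginRegistry()`, `createPlugin()`, and `addPluginV2()` are standard TensorRT 6 calls.

```cpp
// Sketch: load libplugin.so and add the Warping plugin to the network.
#include <NvInfer.h>
#include <dlfcn.h>

using namespace nvinfer1;

ITensor* addWarp(INetworkDefinition* network, ITensor* image, ITensor* flow)
{
    // Loading the library runs its static initializers, which (assuming it
    // uses REGISTER_TENSORRT_PLUGIN) registers the plugin creators globally.
    static void* handle = dlopen("libplugin.so", RTLD_LAZY);
    (void)handle;

    auto* creator = getPluginRegistry()->getPluginCreator("Warp", "1");
    PluginFieldCollection fc{};  // this sketch takes no plugin fields
    IPluginV2* plugin = creator->createPlugin("warp", &fc);

    ITensor* inputs[] = {image, flow};
    IPluginV2Layer* layer = network->addPluginV2(inputs, 2, *plugin);
    return layer->getOutput(0);
}
```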
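Finally, one way to load the PyTorch pretrained parameters is to dump each state_dict tensor to a raw float32 file offline (e.g. `tensor.cpu().numpy().tofile(...)`) and read the blobs back while defining the network. The `loadTensor` helper and the layer/file names below (`pconv1_1`, 16 output maps, stride 2) are illustrative assumptions, not FastFlowNet's actual definition; `Weights` and `addConvolution` are standard TensorRT 6 API.

```cpp
// Sketch: transplant dumped PyTorch weights into TensorRT layers.
#include <NvInfer.h>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

using namespace nvinfer1;

// Read a float32 blob written from PyTorch with tensor.numpy().tofile(...).
std::vector<float> loadTensor(const std::string& path)
{
    std::ifstream f(path, std::ios::binary | std::ios::ate);
    std::vector<float> data(static_cast<size_t>(f.tellg()) / sizeof(float));
    f.seekg(0);
    f.read(reinterpret_cast<char*>(data.data()), data.size() * sizeof(float));
    return data;
}

ITensor* addFirstConv(INetworkDefinition* network, ITensor* input)
{
    // PyTorch Conv2d weights are stored as [out, in, kH, kW], which matches
    // TensorRT's expected kernel layout. The buffers must stay alive until
    // the engine is built, hence the static vectors.
    static auto w = loadTensor("pconv1_1.weight.bin");
    static auto b = loadTensor("pconv1_1.bias.bin");
    Weights kernel{DataType::kFLOAT, w.data(), static_cast<int64_t>(w.size())};
    Weights bias{DataType::kFLOAT, b.data(), static_cast<int64_t>(b.size())};

    auto* conv = network->addConvolution(*input, 16, DimsHW{3, 3}, kernel, bias);
    conv->setStride(DimsHW{2, 2});
    conv->setPadding(DimsHW{1, 1});
    return conv->getOutput(0);
}
```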

Thank you for your open-source work! I am new to deep learning and would like to ask: since your model uses CUDA programming, does that mean it cannot be converted directly to an ONNX model? My GPU is not a 3060 Ti; if I want to deploy this model with TensorRT, do I have to rewrite the TensorRT model myself following your method? Do different CUDA versions require major changes to the TensorRT model?