grimoire/mmdetection-to-tensorrt

How to export a model with only batched NMS layer

Opened this issue · 5 comments

Hi, thanks for your amazing work!

I want to export a model with only "BatchedNMS" (located in post_processing/batched_nms.py) for some reason. In other words, I just want to speed up the NMS step through TensorRT.
Could you please tell me how to organize the config file, or how to export it directly?

Thank you!

Hi
If you want to convert batched_nms.py in this repo, please follow these steps:

1. Import this project to load the converter:

import mmdet2trt

2. Create the layer manually:

layer = mmdet2trt.core.post_processing.batched_nms.BatchedNMS(....)

3. Convert the layer with torch2trt_dynamic:

torch2trt_dynamic(layer, dummy_input, ....)
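The steps above can be sketched as a single helper. This is a hedged sketch, not the repo's exact API: the `BatchedNMS` constructor arguments, the input tensor shapes, and the `fp16_mode` flag are assumptions to be checked against post_processing/batched_nms.py and the torch2trt_dynamic docs.

```python
def convert_batched_nms():
    """Sketch of converting only the BatchedNMS layer to TensorRT.

    Imports are deferred so this file can be read without the heavy
    dependencies installed; all names below follow the maintainer's
    three steps, with argument values as placeholders.
    """
    # Step 1: import the project so its TensorRT converters are registered.
    import torch
    import mmdet2trt
    from torch2trt_dynamic import torch2trt_dynamic

    # Step 2: create the layer manually. Constructor arguments here are
    # assumptions; check the real signature in post_processing/batched_nms.py.
    layer = mmdet2trt.core.post_processing.batched_nms.BatchedNMS(
        iou_threshold=0.5, score_threshold=0.05)

    # Step 3: convert the layer with torch2trt_dynamic, feeding dummy inputs
    # shaped like the layer's real inputs (shapes below are placeholders).
    boxes = torch.rand(1, 1000, 1, 4).cuda()
    scores = torch.rand(1, 1000, 80).cuda()
    trt_layer = torch2trt_dynamic(layer, [boxes, scores], fp16_mode=False)
    return trt_layer
```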


Thanks for your reply.

Do I need to warm up the model as you did in the mmdet2trt function?

Here's the quote:

with torch.no_grad():
    result = wrap_model(dummy_input)

Nope.
I put the warmup code there to initialize some values inside the module (actually, it is useless for now). Just convert the model directly.

twmht commented

@grimoire

What did you modify in the batched NMS layer? The plugin you provide looks very similar to TensorRT's batchedNMSPlugin (https://github.com/NVIDIA/TensorRT/tree/main/plugin/batchedNMSPlugin).

@twmht I do not remember all the details.
One important difference is that in the official implementation the offset is 1 for intersect_width and intersect_height, while in my modification it is 0, so the NMS is aligned with the one in mmcv.
Also, an extra dimension has been added to the first output to support DeepStream. TensorRT OSS has since done the same.
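The offset difference can be illustrated with a minimal pure-Python IoU computation (a sketch of the formula, not the plugin's actual code): with offset=1, box widths and heights are computed as `x2 - x1 + 1` (treating coordinates as inclusive pixel indices), while offset=0 matches mmcv's convention, so the two give different IoU values and can therefore keep or suppress different boxes.

```python
def iou(box_a, box_b, offset):
    """IoU of two (x1, y1, x2, y2) boxes.

    offset=1 mirrors the official TensorRT batchedNMSPlugin convention;
    offset=0 mirrors this repo's modification, aligned with mmcv.
    """
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Intersection width/height: the offset is the only difference.
    iw = max(0.0, ix2 - ix1 + offset)
    ih = max(0.0, iy2 - iy1 + offset)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0] + offset) * (box_a[3] - box_a[1] + offset)
    area_b = (box_b[2] - box_b[0] + offset) * (box_b[3] - box_b[1] + offset)
    return inter / (area_a + area_b - inter)

a = (0.0, 0.0, 10.0, 10.0)
b = (5.0, 5.0, 15.0, 15.0)
print(iou(a, b, offset=0))  # mmcv-style IoU
print(iou(a, b, offset=1))  # official-plugin-style IoU (slightly larger here)
```

Because the two conventions disagree on every overlap, boxes near the IoU threshold can be suppressed under one and kept under the other, which is why the modification matters for matching mmdetection's outputs.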