murdockhou/Yet-Another-EfficientDet-Pytorch-Convert-ONNX-TVM

postprocess in TVM?

zylo117 opened this issue · 4 comments

Great job on this conversion.

As we know, postprocessing takes time. If using TVM means leaving postprocessing to NumPy or similar, I think it will be slower than running on PyTorch. So is it possible to perform postprocessing in TVM, such as the anchor transform, threshold filtering, and NMS? Or is there a better option?
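For reference, the postprocessing in question is roughly the following. This is a minimal NumPy sketch of a RetinaNet-style anchor decode plus greedy NMS; the exact decode convention in this repo may differ, so treat the formulas as an assumption, not the repo's implementation:

```python
import numpy as np

def decode_boxes(anchors, regression):
    """Apply predicted offsets to anchors (RetinaNet-style convention, assumed).

    anchors: (N, 4) as [x1, y1, x2, y2]; regression: (N, 4) as [dx, dy, dw, dh].
    """
    widths = anchors[:, 2] - anchors[:, 0]
    heights = anchors[:, 3] - anchors[:, 1]
    ctr_x = anchors[:, 0] + 0.5 * widths
    ctr_y = anchors[:, 1] + 0.5 * heights

    pred_ctr_x = ctr_x + regression[:, 0] * widths
    pred_ctr_y = ctr_y + regression[:, 1] * heights
    pred_w = np.exp(regression[:, 2]) * widths
    pred_h = np.exp(regression[:, 3]) * heights

    return np.stack([pred_ctr_x - 0.5 * pred_w,
                     pred_ctr_y - 0.5 * pred_h,
                     pred_ctr_x + 0.5 * pred_w,
                     pred_ctr_y + 0.5 * pred_h], axis=1)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of the current box with all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping the kept one
    return keep
```

Running this per image on the host is what costs time compared to keeping the whole pipeline on the GPU.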

Hi, we cannot convert the PyTorch model to TVM when it contains the anchors part, because the model must first be traced (as the conversion code does), and the anchor generation does not survive tracing. So the anchors part needs to be split out of the model and run separately.
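The part that gets split out looks roughly like this: anchor generation runs once on the host (plain NumPy), and only the traced network goes through TVM. This is a hedged sketch of a single pyramid level; the base size `4 * stride` and the scale/ratio layout follow a common EfficientDet convention and are assumptions, not this repo's exact code:

```python
import itertools
import numpy as np

def generate_anchors(image_size, stride, scales, ratios):
    """Precompute anchors for one feature-pyramid level.

    Because this logic cannot go through torch.jit.trace / TVM,
    it runs once outside the compiled model and the result is cached.
    Base anchor side = 4 * stride (common EfficientDet setting, assumed).
    """
    base = 4.0 * stride
    anchors = []
    # one anchor set per feature-map cell, centered on the cell
    for y in np.arange(stride / 2, image_size, stride):
        for x in np.arange(stride / 2, image_size, stride):
            for scale, ratio in itertools.product(scales, ratios):
                w = base * scale * np.sqrt(ratio)
                h = base * scale / np.sqrt(ratio)
                anchors.append([x - w / 2, y - h / 2, x + w / 2, y + h / 2])
    return np.array(anchors, dtype=np.float32)
```

Since the anchors depend only on the input resolution, computing them once and reusing them per frame keeps this split cheap at inference time.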

Meanwhile, the purpose of converting the model to TVM is to run it from C++, so TVM is one choice but not necessarily the best; you can try other frameworks like NCNN, TensorRT, etc.

As far as we know, there is no way to run the anchors part in TVM; hopefully someone can fix this.

OK, thanks for explaining.
Hey, are you guys working on TensorRT or something else? Is it possible to convert this model to TensorRT?

We have done a PyTorch -> Caffe -> TensorRT pipeline for RetinaNet and YOLOv3. But TVM is much friendlier to us: a model compiled with TVM on a GTX 1070 GPU also runs on an RTX 2080, while a TensorRT engine cannot do that. So we are not working on a TensorRT conversion anymore, but it is possible; it just needs some effort.

By the way, I think the TensorRT version would be faster than TVM.

I see. I prefer TVM as well.