JierunChen/FasterNet

Export to ONNX

Opened this issue · 2 comments

Hi, nice idea about PConv followed by channel mixing. Is there any way to export to ONNX with the in-place slicing forward? I found that the exported ONNX file is hundreds of times bigger after exporting, and the inference time is not ideal for ONNX or TensorRT.

+1 on that. @JierunChen could you provide an ONNX export script, please?

Have you solved the problem of slow inference when converting FasterNet to TensorRT?