Would you support conversion from torch to onnx and ncnn?
stereomatchingkiss opened this issue · 7 comments
As the title mentions, one of the strengths of YOLOv9 is its relatively high accuracy at a smaller size and faster speed, which makes it a great fit for embedded devices. I think this would be a nice feature for the project.
@henrytsui000 I would like to contribute to this one. Has any work been done on exporting to different formats like ONNX, CoreML, and TFLite?
Thanks a lot!
You may find some existing code in `YOLO/yolo/utils/deploy_utils.py` (line 11 at commit dc88787).
To be honest, I'm not sure if the code is robust enough, but you can activate it using the following command:
```
python yolo/lazy.py task=inference task.fast_inference=onnx
```
Currently, it only supports ONNX and TensorRT.
If you're willing, you can add support for CoreML and TFLite, and help make the code more robust.
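For what it's worth, a CoreML export could start from a traced copy of the deployment model. This is only a minimal sketch, assuming a fixed 640×640 input; `export_coreml`, the input name, and the output path are hypothetical and not part of this project:

```python
import coremltools as ct
import torch


def export_coreml(model: torch.nn.Module, out_path: str = "yolov9.mlpackage"):
    # Hypothetical helper: trace the eval-mode model on a dummy input so
    # coremltools can convert the static graph.
    example = torch.randn(1, 3, 640, 640)  # assumed input resolution
    traced = torch.jit.trace(model.eval(), example)

    # Convert the traced graph to an ML Program; the input shape is fixed
    # to the example tensor here.
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(name="images", shape=example.shape)],
        convert_to="mlprogram",
    )
    mlmodel.save(out_path)
    return out_path
```

A TFLite path would be more involved (typically ONNX → TensorFlow → TFLite), so starting with CoreML is probably the easier first step.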
Best regards,
Henry Tsui
Thanks for your reply. Gonna check it out and try to contribute over the next few weeks!
@henrytsui000
I think we want to remove the auxiliary branch for all export formats, right?
```python
if self.compiler == "onnx":
    return self._load_onnx_model(device)
elif self.compiler == "trt":
    return self._load_trt_model().to(device)
elif self.compiler == "deploy":
    self.cfg.model.model.auxiliary = {}
```
Yes, the auxiliary head is only used to train the model.
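If that's the direction, one option is to clear the auxiliary branch before dispatching on the compiler, so every export path builds the deployment model. This is just a sketch of the reordering being discussed, reusing the names from the excerpt above; the method name and `_load_torch_model` are placeholders, not the project's actual API:

```python
def load_model(self, device):
    # Sketch: strip the training-only auxiliary branch up front so that
    # the ONNX and TensorRT paths also export the deployment model.
    self.cfg.model.model.auxiliary = {}

    if self.compiler == "onnx":
        return self._load_onnx_model(device)
    elif self.compiler == "trt":
        return self._load_trt_model().to(device)
    # "deploy": nothing extra to do beyond the stripped PyTorch model.
    return self._load_torch_model(device)
```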