OAID/Tengine

Segmentation fault during forward inference of a tmfile converted using the convert tool.

Pratiquea opened this issue · 0 comments

I am trying to convert the yolox_nano model to deploy on a Khadas VIM3. I was able to run forward inference of the yolox_nano model already converted by @BUG1989 on the Khadas VIM3. However, when I try to replicate the conversion process, starting from the PyTorch yolox_nano.pth checkpoint down to the tmfile, and then run the resulting model on my PC, I get a segmentation fault.

Here's the pipeline that I am following:
PyTorch --> ONNX --> tmfile --> uint8 tmfile.
I was able to use the ONNX export script provided by the authors to convert the PyTorch model (checkpoint provided by the authors) to an ONNX model, and I verified that the ONNX model runs forward inference correctly. Next, I converted the ONNX model to a tmfile with the Tengine convert tool (v1.0). However, when I try forward inference on this tmfile as mentioned here, I get the following error:

./build/examples/tm_yolox  -m ../neural_networks_docker/YOLOX/yolox_nano.tmfile -i ../neural_networks_docker/YOLOX/assets/dog.jpg -r 1 -t 1
tengine-lite library version: 1.5-dev
munmap_chunk(): invalid pointer
Aborted (core dumped)
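
For reference, the conversion steps I described above look roughly like this on my machine (paths and the calibration details are from my setup; the export script and flags are as I understand them from the YOLOX repository and the Tengine convert tool help, so treat this as a sketch rather than the exact commands):

```shell
# 1. Export the PyTorch checkpoint to ONNX with the YOLOX export script
#    (script name and flags as in the YOLOX repository; opset 12)
python tools/export_onnx.py --output-name yolox_nano.onnx -n yolox-nano -c yolox_nano.pth

# 2. Simplify the ONNX graph (onnx-simplifier v0.3.5)
python -m onnxsim yolox_nano.onnx yolox_nano_sim.onnx

# 3. Convert the simplified ONNX model to a Tengine tmfile (convert_tool v1.0)
./convert_tool -f onnx -m yolox_nano_sim.onnx -o yolox_nano.tmfile
```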

Since forward inference of the tmfile model is already failing at this point, the subsequent post-training quantization and deployment of the resulting model on the Khadas VIM3 naturally fail as well.
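
For completeness, the uint8 quantization step I intended to run afterwards would be something along these lines (tool name and flags as I understand the Tengine quantization tool; the calibration folder, input geometry, mean, and scale values here are placeholders from my setup, not verified against the working pipeline):

```shell
# Post-training uint8 quantization with the Tengine quantization tool.
# -i points at a folder of calibration images; -g is the input geometry (c,h,w);
# -w / -s are per-channel mean and scale (values below are placeholders).
./quant_tool_uint8 -m yolox_nano.tmfile -i calibration_images/ \
    -o yolox_nano_uint8.tmfile -g 3,416,416 -w 0,0,0 -s 1,1,1
```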

Details of packages used:

onnx               v1.8.1
onnx-simplifier    v0.3.5
onnxoptimizer      v0.3.1
python             v3.6
onnx opset         v12
convert_tool       v1.0

@BUG1989 could you please share the pipeline you used to convert the model?