RizhaoCai/PyTorch_ONNX_TensorRT

error: Failed to parse ONNX model.

luluvi opened this issue · 3 comments

Hello, thank you for your work. I get an error when running the demo, even though I just use model_128.onnx and did not make any changes.
What is the reason and how can I solve it?

error:
Please check if the ONNX model is compatible '
AssertionError: Failed to parse ONNX model.


Hello @luluvi

The immediate cause of this error is usually that the ONNX exporter of PyTorch and the ONNX parser of TensorRT are not always 100% compatible, which is a common pain point for people who use TensorRT with PyTorch.

For example, the ONNX parser of TensorRT < 6 does not support ONNX models exported by PyTorch >= 1.3.

You can do two things:

  1. Please provide your version information so that I can help you figure out the reason.
  2. Run the command "trtexec --onnx=${onnx_file_name} --explicitBatch" and check its output.
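
If you prefer to do this from Python rather than trtexec, here is a minimal sketch of parsing the ONNX file with an explicit-batch network, assuming the TensorRT 7.x Python API; the file name and workspace size are just placeholders, not values from this repo:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path='model_128.onnx', use_fp16=False):
    # The ONNX parser in TensorRT >= 7 requires an explicit-batch network,
    # which is what the --explicitBatch flag of trtexec turns on.
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(explicit_batch)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, 'rb') as f:
        if not parser.parse(f.read()):
            # Print the parser errors instead of only raising an AssertionError,
            # so the incompatible ONNX node becomes visible.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise AssertionError('Failed to parse ONNX model.')

    builder.max_workspace_size = 1 << 28  # 256 MiB, adjust as needed
    if use_fp16 and builder.platform_has_fast_fp16:
        builder.fp16_mode = True  # only helps on GPUs with fast FP16 support

    return builder.build_cuda_engine(network)
```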

Thank you for your reply. I have already solved this problem by setting explicitBatch. I have two questions:

  1. I tested FP32 and FP16; the TensorRT engine files have different sizes, but the inference time is almost the same. Why?
  2. I want to modify the code to batch-test a large number of images. How can I do it?

Can you give me some advice? Thank you very much. Looking forward to your reply.


  1. What GPU are you using? TensorRT FP16 mode is only supported well on certain GPUs, such as the Tesla P100, Jetson TX1/TX2, and Tesla T4; you can check the supported hardware from NVIDIA. GPUs such as the 1080 Ti do not support fast FP16 or INT8. Although running the FP16 optimization can still produce an FP16 engine, the speed improvement may not be significant due to the hardware limitation.
  2. To run the code with a batch size > 1, you can (see the sketch after this list):
    a) When generating the engine, set max_batch_size to the value you want.
    b) Produce a batch of data the way you would with PyTorch, where the data shape is assumed to be [B, C, H, W].
    c) Convert the data to a numpy array and flatten it (shape: [B*C*H*W]).
    d) Run the inference as shown in the code and get the output (shape: [B*C_out*H_out*W_out]).
    e) Reshape the output to [B, C_out, H_out, W_out], which you must know in advance.
    f) Do other post-processing.
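
To make steps (b) through (e) concrete, here is a minimal sketch of batched inference with pycuda, assuming an explicit-batch engine with a single input and a single output; the helper name and the example output shape (B, 10) are hypothetical, not taken from model_128.onnx:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda

def infer_batch(engine, batch, out_shape):
    """Run one numpy batch of shape [B, C, H, W] through a TensorRT engine.

    `out_shape` is the output shape you must know in advance,
    e.g. (B, 10) for a hypothetical 10-class classifier.
    """
    context = engine.create_execution_context()

    # c) flatten the batch into a contiguous 1-D float32 buffer of length B*C*H*W
    h_input = np.ascontiguousarray(batch.astype(np.float32).ravel())
    # allocate the flattened output buffer of length B*C_out*H_out*W_out
    h_output = np.empty(int(np.prod(out_shape)), dtype=np.float32)

    d_input = cuda.mem_alloc(h_input.nbytes)
    d_output = cuda.mem_alloc(h_output.nbytes)
    stream = cuda.Stream()

    # d) copy to the GPU, execute, and copy the result back
    cuda.memcpy_htod_async(d_input, h_input, stream)
    context.execute_async_v2(bindings=[int(d_input), int(d_output)],
                             stream_handle=stream.handle)
    cuda.memcpy_dtoh_async(h_output, d_output, stream)
    stream.synchronize()

    # e) reshape the flat output back to [B, C_out, H_out, W_out]
    return h_output.reshape(out_shape)
```

If the engine was built with a dynamic batch dimension, you would also need to call context.set_binding_shape(0, batch.shape) before executing.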