facebookresearch/ConvNeXt-V2

ONNX Export

sulaimanvesal opened this issue · 1 comment

Hi all,

I just used the ConvNeXt-V2 (Nano) model as the backbone for a regression task. The new model works well, and I wanted to convert it to ONNX. The conversion itself succeeded, but when I test the exported file with ONNX Runtime, it produces the following error:

============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
# Model has been converted to ONNX

File "export_onnex.py", line 183, in evaluate_on_test_set_after_onnx
    ort_outs = ort_session.run(None, ort_inputs)
  File "/home/conda/envs/export_env/lib/python3.8/site-packages/onnxruntime/capi/session.py", line 111, in run
    return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running BiasGelu node. Name:'BiasGelu' Status Message: Input 1 is expected to have 1 dimensions, got 4

I would appreciate any help with understanding why I get this error.

@sulaimanvesal
The error message suggests that there is a mismatch in the expected dimensions for the BiasGelu node during the inference run: the node expects its bias input (input 1) to have one dimension, but it is receiving a tensor with four dimensions. This kind of mismatch often comes from differences in tensor shapes between the PyTorch model and the ONNX representation.

Please try the following approaches to fix your issue.

  • Check Tensor Shapes: Ensure that the input tensors provided to the ONNX model have the correct shapes. You can print the shapes of the tensors in your PyTorch model before exporting to ONNX and compare them with the shapes expected by the ONNX model.
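A minimal sketch of the ONNX side of that comparison (assuming the exported file is named model.onnx; adjust the path to yours):

import onnxruntime as ort

# Inspect the input/output signatures the exported graph actually expects
ort_session = ort.InferenceSession("model.onnx")
for inp in ort_session.get_inputs():
    print(f"input  {inp.name}: shape={inp.shape}, type={inp.type}")
for out in ort_session.get_outputs():
    print(f"output {out.name}: shape={out.shape}, type={out.type}")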

  • Simplify the ONNX Model: Use onnx-simplifier, which can sometimes resolve issues caused by complex node patterns. Install it with pip install onnx-simplifier and run it on your model:

python3 -m onnxsim your_model.onnx your_model_simplified.onnx
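If you prefer to do this from Python, recent onnxsim releases also expose a simplify() helper (a sketch; the file names are placeholders):

import onnx
from onnxsim import simplify

model = onnx.load("your_model.onnx")
model_simplified, check = simplify(model)
assert check, "Simplified ONNX model failed validation"
onnx.save(model_simplified, "your_model_simplified.onnx")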

  • Update ONNXRuntime: Ensure that you are using the latest version of ONNXRuntime, as bugs and issues are often fixed in newer releases.
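For example, to upgrade and confirm the installed version:

python3 -m pip install --upgrade onnxruntime
python3 -c "import onnxruntime; print(onnxruntime.__version__)"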

  • Check Node Specifications: Inspect the graph to see which ops feed the failing node. You can use Netron or parse the ONNX model programmatically:

import onnx

# Load the exported model, run the structural checker, and print a
# human-readable version of the graph to inspect its ops and shapes
model = onnx.load("your_model.onnx")
onnx.checker.check_model(model)
print(onnx.helper.printable_graph(model.graph))

  • Custom Operators: BiasGelu is not a standard ONNX operator; it is a contrib op that ONNX Runtime's graph optimizer creates by fusing an Add and a Gelu. If this fusion is misbehaving, you might need to handle the conversion explicitly or use a workaround.
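One workaround worth trying (a sketch, assuming the BiasGelu fusion itself is the culprit): disable ONNX Runtime's graph optimizations when creating the session, so the Add and Gelu nodes run as exported instead of being fused into BiasGelu:

import onnxruntime as ort

# Turn off graph-level fusions such as BiasGelu; the model runs as exported
session_options = ort.SessionOptions()
session_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL
ort_session = ort.InferenceSession("model.onnx", session_options)

If inference succeeds with optimizations disabled, the fusion rather than the export is the problem, and you can then try a less aggressive level such as ORT_ENABLE_BASIC.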

  • Preprocessing: Ensure that any preprocessing steps applied during inference are consistent with those used during training and ONNX export.
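For instance, ConvNeXt models are commonly trained with standard ImageNet normalization; here is a sketch of matching preprocessing at inference time (verify the mean/std values against your own training pipeline):

import numpy as np

# Standard ImageNet statistics -- confirm these match your training setup
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8):
    """Convert an HxWx3 uint8 image to a 1x3xHxW normalized float32 array."""
    x = image_hwc_uint8.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    return x.transpose(2, 0, 1)[None]  # HWC -> NCHW with batch dimension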

Here is a conceptual approach for checking the tensor shapes before exporting to ONNX:

import torch
import onnx

# Your model definition
model = ...  # Load or define your ConvNeXt-V2 model
model.eval()

# Dummy input tensor with the same shape as your actual input
dummy_input = torch.randn(1, 3, 224, 224)

# Print the shape of the output tensor before exporting to ONNX
with torch.no_grad():
    output = model(dummy_input)
    print(f"Output shape before ONNX export: {output.shape}")

# Export the model to ONNX
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    export_params=True,
    opset_version=12,
    do_constant_folding=True,
    input_names=['input'],
    output_names=['output'],
    dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}}
)

# Load and check the ONNX model
onnx_model = onnx.load("model.onnx")
onnx.checker.check_model(onnx_model)
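As a final sanity check (continuing from the snippet above and assuming onnxruntime is installed), run the exported model through ONNX Runtime and compare its output with the PyTorch output:

import numpy as np
import onnxruntime as ort

ort_session = ort.InferenceSession("model.onnx")
ort_inputs = {"input": dummy_input.numpy()}
ort_outs = ort_session.run(None, ort_inputs)

# The two outputs should agree within floating-point tolerance
np.testing.assert_allclose(output.numpy(), ort_outs[0], rtol=1e-03, atol=1e-05)
print("PyTorch and ONNX Runtime outputs match")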

Best of luck!