ultralytics/yolov5

The accuracy of the .pt model decreases after it is converted to a .engine model.

arkerman opened this issue · 5 comments

Search before asking

  • I have searched the YOLOv5 issues and found no similar bug report.

YOLOv5 Component

Detection

Bug

The results I obtain when running inference with the .pt model and the .engine model are different.

Environment

No response

Minimal Reproducible Example

No response

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!

@arkerman hello!

It's quite common to observe slight discrepancies in model performance when converting from a .pt file to a .engine file due to differences in optimization and precision handling between the two formats. To minimize such discrepancies, ensure that the precision matches on both sides of the conversion (e.g., FP32 in both cases) and that all optimization settings are similar.
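For example, a minimal sketch, assuming you are working from the YOLOv5 repository root (where export.py exposes a run() helper) and using a placeholder checkpoint path:

# Build the TensorRT engine in FP32 so its precision matches the
# .pt checkpoint ('yolov5s.pt' is a placeholder path).
from export import run

run(
    weights='yolov5s.pt',  # placeholder checkpoint path
    include=('engine',),   # export a TensorRT .engine
    device='0',            # TensorRT export requires a CUDA device
    half=False,            # keep FP32; half=True would build an FP16 engine
)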

If the performance difference is significant and these adjustments don't help, consider reviewing the conversion logs for any warnings or error messages that could indicate what might be going wrong during the process.

Happy coding! 😊

A warning is reported during the conversion process:
"Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32."
So how should I convert the model so that the outputs stay aligned?

Hi @arkerman!

The warning you're seeing indicates a data-type mismatch during the conversion from ONNX to TensorRT: TensorRT does not natively support INT64 weights. To help ensure better precision and compatibility, you could manually cast the weights from INT64 to FP32 before converting to TensorRT. This is generally more aligned with TensorRT's capabilities than INT32 and helps minimize potential loss of information. Here's a quick code snippet to adjust the data types in the ONNX model:

import onnx
import numpy as np
from onnx import numpy_helper

# Load your ONNX model
model = onnx.load('model.onnx')

# Iterate over a copy of the initializer list, since we modify it inside the loop
for initializer in list(model.graph.initializer):
    data = numpy_helper.to_array(initializer)
    if data.dtype == np.int64:
        # Cast INT64 to FP32
        data = data.astype(np.float32)
        # Replace the initializer with the new data
        new_initializer = numpy_helper.from_array(data, initializer.name)
        model.graph.initializer.remove(initializer)
        model.graph.initializer.append(new_initializer)

# Save the modified model
onnx.save(model, 'modified_model.onnx')

This snippet converts INT64 weights to FP32, which might help with your conversion process! 😊 Happy coding!
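Before rebuilding the TensorRT engine, it may also be worth validating the rewritten graph with the ONNX checker (a quick sanity check, assuming the onnx package is installed), which should surface any type errors introduced by the cast:

import onnx

# Validate the modified graph before handing it to TensorRT
model = onnx.load('modified_model.onnx')
onnx.checker.check_model(model)  # raises if the graph is now invalid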

@glenn-jocher Thanks for your help!
But it seems that the code snippet does not work.
It raised an error: "onnxruntime.capi.onnxruntime_pybind11_state.InvalidGraph: [ONNXRuntimeError] : 10 : INVALID_GRAPH : Load model from modified_yolov5s.onnx failed:This is an invalid model. Type Error: Type 'tensor(float)' of input parameter (onnx::Reshape_468) of operator (Reshape) in node (Reshape_237) is invalid."
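For reference, this error occurs because the ONNX specification requires the second (shape) input of Reshape to be tensor(int64), so blanket-casting every INT64 initializer to FP32 invalidates nodes like Reshape_237. A hypothetical variant of the snippet above that leaves such shape tensors untouched might look like this (a sketch only; other ops such as Expand or Unsqueeze have similar INT64 requirements and may need the same treatment):

import onnx
import numpy as np
from onnx import numpy_helper

model = onnx.load('model.onnx')

# Names of initializers consumed as the 'shape' input of Reshape nodes;
# the ONNX spec requires these to stay tensor(int64), so they are skipped.
reshape_shape_inputs = {
    node.input[1]
    for node in model.graph.node
    if node.op_type == 'Reshape' and len(node.input) > 1
}

for initializer in list(model.graph.initializer):
    if initializer.name in reshape_shape_inputs:
        continue  # leave Reshape shape tensors as INT64
    data = numpy_helper.to_array(initializer)
    if data.dtype == np.int64:
        new_initializer = numpy_helper.from_array(
            data.astype(np.float32), initializer.name
        )
        model.graph.initializer.remove(initializer)
        model.graph.initializer.append(new_initializer)

onnx.save(model, 'modified_model.onnx')

Note also that TensorRT's original warning about casting INT64 down to INT32 is typically harmless when the affected values fit in INT32, so it is often not the cause of an accuracy gap by itself.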