grimoire/mmdetection-to-tensorrt

Setting opt_shape_param min_shape==opt_shape==max_shape does not work correctly.

tehkillerbee opened this issue · 2 comments

Hello, I have been trying to set min_shape == opt_shape == max_shape, since fixed input shapes are a requirement when using Jetson DLA cores.
However, when I set the three shape types to be identical, I get an error indicating that the supplied dimension is wrong, irrespective of the profile dimension. First I tried setting a fixed value, both identical to my PyTorch model (768x768) and slightly larger (800x800):

opt_shape_param = [
    [
        [1, 3, 800, 800],  # min shape
        [1, 3, 800, 800],  # optimize shape
        [1, 3, 800, 800],  # max shape
    ]
]

This results in an error:
[12/17/2021-14:00:28] [TRT] [E] 3: [executionContext.cpp::setBindingDimensions::945] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::setBindingDimensions::945, condition: profileMinDims.d[i] <= dimensions.d[i]. Supplied binding dimension [1,3,576,768] for bindings[0] exceed min ~ max range at index 2, maximum dimension in profile is 800, minimum dimension in profile is 800, but supplied dimension is 576.
)
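The error message means that TensorRT validates each binding dimension against the profile's [min, max] range per axis: with the profile pinned to 800x800, the actual preprocessed tensor of height 576 fails the check at index 2. A minimal sketch of that per-axis check (the helper name is illustrative, not a TensorRT API):

```python
def binding_in_profile(profile_min, profile_max, supplied):
    """Return True only if every supplied dim lies within [min, max]."""
    return all(mn <= d <= mx
               for mn, d, mx in zip(profile_min, supplied, profile_max))

# Profile pinned to 800x800, but the preprocessed tensor is 576x768:
print(binding_in_profile([1, 3, 800, 800], [1, 3, 800, 800],
                         [1, 3, 576, 768]))  # False -> setBindingDimensions rejects it
```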

Changing opt_shape_param as listed below solves the issue, and there are no errors anymore.

opt_shape_param = [
    [
        [1, 3, 576, 768],  # min shape
        [1, 3, 576, 768],  # optimize shape
        [1, 3, 576, 768],  # max shape
    ]
]

My PyTorch model should be using an input size of 768x768, not 768x576, so I do not understand what is going wrong. Do you have some pointers on how to set opt_shape_param correctly, if setting it identical to the size used by PyTorch is not enough?

opt_shape_param marks the shape of the input tensor. Most models in mmdetection resize the input while keeping the aspect ratio, which means the actual input tensor shape can differ from the shape you set in the config.
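The keep-ratio resize explains where 576x768 comes from. A rough sketch of the rescaling rule mmdetection's Resize pipeline applies (the function name is illustrative; assuming a 640x480 source image, which is not stated in the issue):

```python
def keep_ratio_rescale(w, h, long_edge, short_edge):
    """Scale (w, h) so neither the long edge exceeds long_edge
    nor the short edge exceeds short_edge, preserving aspect ratio."""
    scale = min(long_edge / max(w, h), short_edge / min(w, h))
    return round(w * scale), round(h * scale)

# A 4:3 image resized with img_scale=(768, 768):
print(keep_ratio_rescale(640, 480, 768, 768))  # (768, 576)
```

This yields a 768x576 image, i.e. the [1, 3, 576, 768] binding (NCHW) reported in the error, even though the config says 768x768.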

I see, that explains why a resolution of 768x576 is used, even though the model was trained with 768x768. I will close this issue, since it is not a bug.