rwightman/gen-efficientnet-pytorch

ONNX error with different input size

alicera opened this issue · 4 comments

I exported efficientnet_b0 to ONNX and set the input size to 640x640, and the following error occurs:

```python
import onnx
import onnx_tensorrt.backend as backend
import numpy as np
# Load the exported model and build a TensorRT engine from it
model = onnx.load("efficientnet_b0.onnx")
engine = backend.prepare(model, device='CUDA:0')

# Run inference with a random 640x640 input
input_data = np.random.random(size=(1, 3, 640, 640)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data)
print(output_data.shape)
```

[Error]

```
[TensorRT] ERROR: Parameter check failed at: ../builder/Network.cpp::addPoolingNd::500, condition: allDimsGtEq(windowSize, 1) && volume(windowSize) < MAX_KERNEL_DIMS_PRODUCT
Traceback (most recent call last):
  File "test_onnx.py", line 7, in <module>
    engine = backend.prepare(model, device='CUDA:0')
  File "/opt/conda/lib/python3.6/site-packages/onnx_tensorrt-0.1.0-py3.6-linux-x86_64.egg/onnx_tensorrt/backend.py", line 218, in prepare
    return TensorRTBackendRep(model, device, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/onnx_tensorrt-0.1.0-py3.6-linux-x86_64.egg/onnx_tensorrt/backend.py", line 94, in __init__
    raise RuntimeError(msg)
RuntimeError: While parsing node number 8:
builtin_op_importers.cpp:1175 In function importGlobalAveragePool:
[8] Assertion failed: layer_ptr
```

I assume you exported to ONNX with the same resolution set? If yes, I don't have an answer. The ONNX ecosystem is a bit touchy; I've seen numerous breaks over the past few PyTorch and ONNX version changes, and TensorRT adds yet another variable. Good luck, and please post an update if you find a solution, to help others.
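For anyone wanting to re-export at a matching resolution, here is a minimal sketch (assuming the `geffnet` package from this repo is installed; the input/output names and opset version are guesses to adapt, and the repo's own `onnx_export.py` script is the authoritative reference):

```python
import torch
import geffnet  # the package provided by this repo

# Swap in export-friendly ops before building the model
# (assumption: this mirrors what the repo's onnx_export.py does)
geffnet.config.set_exportable(True)

model = geffnet.create_model('efficientnet_b0', pretrained=True)
model.eval()

# Trace at the exact resolution you plan to run under TensorRT
dummy = torch.randn(1, 3, 640, 640)
torch.onnx.export(model, dummy, 'efficientnet_b0.onnx',
                  input_names=['input0'], output_names=['output0'],
                  opset_version=10)  # opset is an assumption; match your toolchain
```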

If I take a 105 x 105 feature map and apply your stride-2 conv,
the output size is (53 x 53),

but https://github.com/lukemelas/EfficientNet-PyTorch
does the same thing and the output size is (52 x 52).
Why? Thanks.

@alicera I'm not sure what your issue is. As far as I understand, 53x53 is the correct output if the input is 105x105, the stride is 2, and the padding is set to 'SAME' or 1, as it should be for the stride-2 convs in this network.
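The arithmetic can be checked in plain PyTorch (a minimal sketch; the 40 → 80 channel counts are arbitrary, chosen only to mirror the P2 → P3 stage discussed below). With padding=1 the output is floor((105 + 2 - 3) / 2) + 1 = 53; if the effective padding is too small for the input, for example a 'SAME' padding computed statically for a different resolution, the last rows/columns are dropped and you get 52:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 40, 105, 105)

# padding=1, the 'SAME'-equivalent for a 3x3 kernel:
# floor((105 + 2*1 - 3) / 2) + 1 = 53
conv_same = nn.Conv2d(40, 80, kernel_size=3, stride=2, padding=1)
print(conv_same(x).shape)  # torch.Size([1, 80, 53, 53])

# no padding: floor((105 - 3) / 2) + 1 = 52
conv_valid = nn.Conv2d(40, 80, kernel_size=3, stride=2, padding=0)
print(conv_valid(x).shape)  # torch.Size([1, 80, 52, 52])
```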

With https://github.com/lukemelas/EfficientNet-PyTorch,
if I set the input size to (1, 3, 840, 840),
the layer outputs are:
P0 torch.Size([1, 16, 420, 420])
P1 torch.Size([1, 24, 210, 210])
P2 torch.Size([1, 40, 105, 105])
P3 torch.Size([1, 80, 52, 52])
P4 torch.Size([1, 112, 26, 26])
P5 torch.Size([1, 192, 13, 13])
P6 torch.Size([1, 320, 6, 6])

With the same input size (1, 3, 840, 840),
the P3 output in your project is torch.Size([1, 80, 53, 53]).
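For reference, TF-style 'SAME' padding keeps every stride-2 stage at ceil(in / stride), which reproduces this project's sizes from an 840x840 input (a minimal check; the stage names just mirror the P0–P3 labels above):

```python
import math

# With 'SAME' padding, each stride-2 stage gives out = ceil(in / stride)
size = 840
for stage in ('P0', 'P1', 'P2', 'P3'):
    size = math.ceil(size / 2)
    print(stage, size)
# P0 420, P1 210, P2 105, P3 53 -> 53 matches this repo; a 52 at P3
# suggests the padding no longer covers the input at this resolution
```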