OpenGVLab/InternImage

Cannot use the Python API to run inference after converting the segmentation model to TensorRT

chenzhutian opened this issue · 0 comments

After converting the model to the TensorRT format, I tried to run inference using mmdeploy's Python API.
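For context, my call into the SDK follows the standard mmdeploy_runtime pattern. This is only a minimal sketch, not my actual run_deploy.py, and the image path is a placeholder:

import cv2
from mmdeploy_runtime import Segmentor

# Directory produced by the mmdeploy conversion (same path as in the log below).
model_dir = './work_dirs/mmseg/upernet_internimage_t_512_160k_ade20k'

# Placeholder test image; cv2.imread returns None if the path is wrong,
# which would also explain the "Invoked with: ..., None" TypeError below.
img = cv2.imread('./demo.png')

# Create the SDK segmentor on the first CUDA device and run inference.
segmentor = Segmentor(model_path=model_dir, device_name='cuda', device_id=0)
seg = segmentor(img)  # expected: (H, W) label mask as a numpy array
print(seg.shape)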
However, running it shows:


[2024-07-03 01:07:52.341] [mmdeploy] [info] [model.cpp:35] [DirectoryModel] Load model: "./work_dirs/mmseg/upernet_internimage_t_512_160k_ade20k"
[2024-07-03 01:07:52.639] [mmdeploy] [error] [compose.cpp:37] Unable to find Transform creator: ResizeToMultiple. Available transforms: [("CenterCrop", 0), ("Collect", 0), ("Compose", 0), ("DefaultFormatBundle", 0), ("FormatShape", 0), ("ImageToTensor", 0), ("Lift", 0), ("LoadImageFromFile", 0), ("Normalize", 0), ("Pad", 0), ("Resize", 0), ("ResizeOCR", 0), ("TenCrop", 0), ("ThreeCrop", 0), ("TopDownAffine", 0), ("TopDownGetBboxCenterScale", 0)]
[2024-07-03 01:07:52.640] [mmdeploy] [error] [task.cpp:99] error parsing config: {
  "context": {
    "device": "<any>",
    "model": "<any>",
    "stream": "<any>"
  },
  "input": [
    "img"
  ],
  "module": "Transform",
  "name": "Preprocess",
  "output": [
    "prep_output"
  ],
  "transforms": [
    {
      "type": "LoadImageFromFile"
    },
    {
      "keep_ratio": false,
      "size": [
        512,
        512
      ],
      "type": "Resize"
    },
    {
      "size_divisor": 32,
      "type": "ResizeToMultiple"
    },
    {
      "mean": [
        123.675,
        116.28,
        103.53
      ],
      "std": [
        58.395,
        57.12,
        57.375
      ],
      "to_rgb": true,
      "type": "Normalize"
    },
    {
      "keys": [
        "img"
      ],
      "type": "ImageToTensor"
    },
    {
      "keys": [
        "img"
      ],
      "meta_keys": [
        "filename",
        "ori_filename",
        "flip_direction",
        "valid_ratio",
        "scale_factor",
        "flip",
        "img_norm_cfg",
        "ori_shape",
        "img_shape",
        "pad_shape"
      ],
      "type": "Collect"
    }
  ],
  "type": "Task"
}
[2024-07-03 01:07:52.640] [mmdeploy] [error] [net_module.cpp:47] Net backend not found: tensorrt, available backends: []
[2024-07-03 01:07:52.640] [mmdeploy] [error] [task.cpp:99] error parsing config: {
  "context": {
    "device": "<any>",
    "model": "<any>",
    "stream": "<any>"
  },
  "input": [
    "prep_output"
  ],
  "input_map": {
    "img": "input"
  },
  "is_batched": false,
  "module": "Net",
  "name": "uper",
  "output": [
    "infer_output"
  ],
  "output_map": {},
  "type": "Task"
}
Traceback (most recent call last):
  File "run_deploy.py", line 54, in <module>
    main()
  File "run_deploy.py", line 37, in main
    seg = segmentor(img)
TypeError: __call__(): incompatible function arguments. The following argument types are supported:
    1. (self: mmdeploy_runtime.mmdeploy_runtime.Segmentor, arg0: numpy.ndarray[numpy.uint8]) -> numpy.ndarray

Invoked with: <mmdeploy_runtime.mmdeploy_runtime.Segmentor object at 0x7f5c199330b0>, None

It looks like this happens because mmdeploy's runtime does not register a ResizeToMultiple transform. The log also reports "Net backend not found: tensorrt", so the installed mmdeploy_runtime may not include the TensorRT backend either.
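One possible (untested) workaround might be to drop the ResizeToMultiple entry from the generated pipeline.json, since the preceding Resize step already produces a fixed 512x512 input that is divisible by 32. The snippet below is only a sketch and assumes the usual mmdeploy SDK layout where the preprocessing transforms live in <model_dir>/pipeline.json:

import json
from pathlib import Path

# Hypothetical workaround sketch: strip the unsupported transform from the
# SDK pipeline config. Back up pipeline.json before editing.
model_dir = Path('./work_dirs/mmseg/upernet_internimage_t_512_160k_ade20k')
pipeline_file = model_dir / 'pipeline.json'

cfg = json.loads(pipeline_file.read_text())
for task in cfg.get('pipeline', {}).get('tasks', []):
    # Drop ResizeToMultiple; with a fixed 512x512 Resize it should be a no-op.
    if 'transforms' in task:
        task['transforms'] = [
            t for t in task['transforms'] if t.get('type') != 'ResizeToMultiple'
        ]
pipeline_file.write_text(json.dumps(cfg, indent=2))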
Can you please share the inference code with the public?
Thanks!