onnx/tensorflow-onnx

tf.image.resize can't convert to FP16 model

nistarlwc opened this issue · 1 comment

Describe the bug

I have a segmentation model, BiSeNet V2.
I need to convert it to an FP16 model. First I converted it to an FP32 model successfully, and prediction with it also succeeds.
Then I used the float16_converter from onnxmltools to convert it to an FP16 model.
But when I run the prediction, there is an error:
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from model_fp16.onnx failed:Node (Resize__846) Op (Resize) [ShapeInferenceError] Either sizes or scales must be provided, but not both of them

Is the problem related to the tf.image.resize_bilinear function?
How can I solve it?

I tried to find some issues with the same problem:

  • FP16 conversion yields an unusable model
  • support sizes for Resize op

Urgency

Urgent

System information

  • OS Platform and Distribution: Windows 10
  • TensorFlow version: 2.3
  • Python version: 3.8
  • ONNX version: 1.12.0
  • ONNXRuntime version: onnxruntime-gpu 1.15.1

To Reproduce
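
A minimal sketch of the pipeline described above that hits the error, assuming the FP32 model was exported with tf2onnx; file names, opset, and the execution provider are placeholders:

```python
# Hypothetical reproduction; file names and opset are placeholders.
# FP32 export (succeeds):
#   python -m tf2onnx.convert --saved-model ./saved_model --output model_fp32.onnx --opset 13
import onnx
import onnxruntime as ort
from onnxmltools.utils.float16_converter import convert_float_to_float16

# FP16 conversion via onnxmltools (succeeds)
model_fp32 = onnx.load("model_fp32.onnx")
model_fp16 = convert_float_to_float16(model_fp32)
onnx.save(model_fp16, "model_fp16.onnx")

# Session creation fails here with the Resize shape-inference error above
sess = ort.InferenceSession("model_fp16.onnx", providers=["CUDAExecutionProvider"])
```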

Screenshots

Additional context

I think the attributes of the Resize op were reset during the conversion to the FP16 model. @xiaowuhu, do you have any thoughts?
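
To check that, here is a small sketch (the file name is a placeholder) that prints the inputs of every Resize node in the converted model; for Resize in opset 11+, shape inference expects exactly one of the scales/sizes inputs to be non-empty:

```python
import onnx

model = onnx.load("model_fp16.onnx")
for node in model.graph.node:
    if node.op_type == "Resize":
        # Opset 11+ Resize inputs are: X, roi, scales, sizes.
        # An empty string in this list means the input is not provided.
        print(node.name, list(node.input))
```

If both scales and sizes turn out to be populated after the FP16 pass, a possible workaround (assuming a recent onnxconverter_common, which onnxmltools delegates to) might be to keep Resize in FP32 via the op_block_list parameter, e.g. convert_float_to_float16(model_fp32, op_block_list=["Resize"]).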