microsoft/onnxconverter-common

Float mismatch error after float16 quantization: Data in initializer 'onnx::Add_2877' has element type tensor(float16) but usage of initializer in graph expects tensor(float)

model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True, disable_shape_infer=True)
With keep_io_types=True enabled, the conversion raises the float mismatch error above.

I want to cry, but I can't.