Issues
Resize in FP16
#306 opened by wejoncy - 1
Vulnerability due to pinned protobuf package
#303 opened by famenzel - 1
Float mismatch error after float16 quantization: Data in initializer 'onnx::Add_2877' has element type tensor(float16) but usage of initializer in graph expects tensor(float)
#304 opened by majisama - 0
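Many of the float16 failures in this list (#304 above, and the inference and loading issues further down) trace back to converting a graph while its inputs, outputs, or some initializers stay float32. A minimal sketch of the library's conversion entry point, assuming an illustrative model.onnx path; keep_io_types=True keeps graph I/O in float32 and inserts boundary Cast nodes, which often sidesteps this type-mismatch error:

```python
import onnx
from onnxconverter_common import float16

model = onnx.load("model.onnx")  # illustrative path

# Convert tensors and initializers to float16; keep_io_types=True leaves the
# graph inputs/outputs as float32 and inserts Cast nodes at the boundaries.
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True)

onnx.checker.check_model(model_fp16)  # surfaces type/topology problems early
onnx.save(model_fp16, "model_fp16.onnx")
```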
Integrate with ONNX 1.17.0 release branch
#302 opened by roborags - 0
FP16 precision returns NaN at the /norm/ReduceMean_output_0 node
#299 opened by 13484835805 - 5
Max supported opset is very old
#296 opened by addisonklinke - 0
👋 Version update?
#298 opened by juntaosun - 0
ValueError: Validation Failed
#297 opened by RhinoInani - 0
max_finite_val default value
#295 opened by haowang5128 - 1
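For context on #295: convert_float_to_float16 clamps converted values into a conservative float16 range through its min_positive_val and max_finite_val parameters (1e-7 and 1e4 by default in the releases I have seen), even though float16's true finite maximum is 65504. A sketch of overriding the bounds, with an illustrative model path:

```python
import onnx
from onnxconverter_common import float16

model = onnx.load("model.onnx")  # illustrative path

# Widen the clamping range to the full float16 envelope: 65504 is the largest
# finite float16 value, ~5.96e-8 the smallest positive subnormal.
model_fp16 = float16.convert_float_to_float16(
    model,
    min_positive_val=5.96e-8,
    max_finite_val=65504.0,
)
```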
New release
#292 opened by gdippolito - 2
[Error] Load fp16
#287 opened by phamkhactu - 1
protobuf version
#265 opened by OKUA1 - 1
onnxconverter_common.auto_mixed_precision.auto_convert_mixed_precision never ends
#251 opened by FrancescoSaverioZuppichini - 6
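Background for the auto_convert_mixed_precision reports (#251 among them): the function repeatedly converts candidate regions of the graph to float16 and validates each attempt against the float32 outputs, so a tolerance the model can never meet keeps the search running. A sketch following the pattern in the project README; the model path and input name/shape are illustrative:

```python
import numpy as np
import onnx
from onnxconverter_common import auto_mixed_precision

model = onnx.load("model.onnx")  # illustrative path
# Feed dict mapping graph input names to sample data (name/shape illustrative).
test_data = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}

# rtol/atol bound the allowed drift from the float32 outputs; overly tight
# tolerances are a common reason the search appears to never finish.
model_fp16 = auto_mixed_precision.auto_convert_mixed_precision(
    model, test_data, rtol=0.01, atol=0.001, keep_io_types=True
)
onnx.save(model_fp16, "model_fp16.onnx")
```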
convert_float_to_float16() produces a model that causes ValidationError with onnx.checker.check_model()
#256 opened by SergeySandler - 1
Converting model fp32 to fp16 with auto_mixed_precision_model_path gets NaN
#249 opened by taoisu - 5
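#249 and the disk-space report further down use the path-based variant, which works from files on disk instead of holding the model in memory (useful for models over the 2 GB protobuf limit). The positional arguments below mirror the in-memory API, but the keyword names are an assumption and may differ between versions:

```python
import numpy as np
from onnxconverter_common import auto_mixed_precision_model_path as ammp

# Paths and input name/shape are illustrative.
input_feed = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}

ammp.auto_convert_mixed_precision_model_path(
    "model_fp32.onnx",      # source float32 model on disk
    input_feed,
    "model_fp16.onnx",      # destination for the mixed-precision result
    rtol=0.01, atol=0.001,  # assumed keyword names, mirroring the in-memory API
)
```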
`auto_convert_mixed_precision` error: "two nodes with same node name" occurs during conversion
#259 opened by KunMengcode - 2
FP16 conversion yields an unusable model
#266 opened by eddieevt-DXC - 1
Are there any upgrades to onnxconverter-common?
#270 opened by hanzigs - 4
Resize op fails to convert to FP16
#272 opened by nistarlwc - 2
ONNX Quantisation
#273 opened by vanditha18 - 1
Integrate with ONNX 1.16.0 release branch
#277 opened by cjvolzka - 1
Redundant dependencies in requirements.txt
#250 opened by jonathanunderwood - 1
Documentation
#264 opened by laclouis5 - 1
Failing tests
#242 opened by FRidh - 2
onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'Resize__139_input_cast_1' of node: name: Resize__139 OpType: Resize is not output of any previous nodes.
#261 opened by 1615070057 - 3
Issues when converting model to float16
#208 opened by toothache - 2
Inference issue after convert_float_to_float16
#200 opened by leqiao-1 - 1
Version 1.12.2 released?
#245 opened by philipwan - 0
FP32 --> FP16: the original FP32 model works well with input data, but the converted FP16 model fails with the same input data
#196 opened by yetingqiaqia - 0
Source tarball for v1.9.0 republished?
#204 opened by iarspider - 1
Add bounds warning to FP16 conversion script
#211 opened by kevinch-nv - 1
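The bounds warning proposed in #211 matters because float16 saturates silently: magnitudes above 65504 become inf and values much below ~6e-8 flush to zero, which is exactly how the NaN/inf reports elsewhere in this list arise. A quick numpy demonstration:

```python
import numpy as np

print(np.float16(70000.0))  # inf     -- above the float16 max finite value (65504)
print(np.float16(65504.0))  # 65504.0 -- the largest finite float16
print(np.float16(1e-8))     # 0.0     -- below the smallest subnormal (~5.96e-8)
```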
Error: onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'TopK_111_input_cast_0' of node:
#240 opened by seungtaek94 - 1
`Unsupported shape calculation for operator mlProgram` while using `onnxmltools.convert_coreml`
#241 opened by KOLANICH - 3
F16 file does not convert correctly
#207 opened by hardik124 - 1
Problem converting a model from 32-bit to 16-bit
#238 opened by MuxiRabbit - 1
StrictVersion is deprecated
#220 opened by xiaowuhu - 2
Security Development Lifecycle review for 2022-06
#213 opened by garymm - 1
"No space left on device" issue on auto_convert_mixed_precision_model_path()
#229 opened by yetingqiaqia - 1
Add NOTICE file to onnxconverter-common
#227 opened by xiaowuhu - 3
Verify ONNX 1.12.0 RC
#218 opened by xiaowuhu - 2