Support 8bit quantization and half-precision floating point representation

What to add:

For instance, WinMLTools enables the optimization of ONNX models by either

  • converting the 32-bit floating point (FP32) representation into 16-bit floating point (IEEE 754 half), effectively compressing the model to half its size, or
  • quantizing FP32 models into 8-bit integer representations, which yields a disk footprint reduction of up to 75%, depending on the model.

Currently, neither is supported by ONNC.
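
For reference, this is roughly what those two conversions look like with WinMLTools. This is a sketch based on its documented `load_model`/`save_model`, `convert_float_to_float16`, and `quantize` utilities; exact module paths, signatures, and keyword arguments may differ between versions, and the file names are placeholders:

```python
# Sketch based on the WinMLTools documentation; verify the exact API against
# your installed version. 'model.onnx' is a placeholder path.
from winmltools.utils import convert_float_to_float16, load_model, save_model
from winmltools import quantize

model = load_model('model.onnx')

# FP32 -> FP16 (IEEE 754 half): halves the serialized model size.
save_model(convert_float_to_float16(model), 'model_fp16.onnx')

# FP32 -> 8-bit integers: linear (scale + zero point) weight quantization,
# cutting the disk footprint by up to ~75%.
save_model(quantize(model, per_channel=True, nbits=8, use_dequantize_linear=True),
           'model_int8.onnx')
```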

Why it is necessary:

Because ONNC is well suited to generating natively executable models targeting MCUs with tight memory constraints, it would be very useful if ONNC supported one or both of these model optimization methods.

How to achieve it:

Support a 16-bit floating point representation for inputs and operators, and/or support 8-bit integer representations for operators; a sketch of the underlying arithmetic follows below.
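
To make the operator-level requirement concrete, here is a minimal NumPy sketch of affine (scale and zero-point) 8-bit quantization; the function names are illustrative, not ONNC or ONNX APIs. It shows the arithmetic an int8-capable backend has to implement, and where the 75% (int8) and 50% (FP16) size reductions come from:

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Map FP32 values onto unsigned num_bits integers via x ~ scale * (q - zero_point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min = min(float(x.min()), 0.0)   # include 0 so it stays exactly representable
    x_max = max(float(x.max()), 0.0)
    scale = max((x_max - x_min) / (qmax - qmin), 1e-12)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(64, 64).astype(np.float32)   # FP32 weights: 16 KiB
w16 = w.astype(np.float16)                       # FP16 copy:     8 KiB (50% smaller)
q, scale, zp = quantize_affine(w)                # int8 copy:     4 KiB (75% smaller)
err = np.abs(w - dequantize_affine(q, scale, zp)).max()
print(f"max round-trip error: {err:.5f}")
```

Beyond storage, full operator support would also require integer compute kernels (e.g., an int8 matrix multiply followed by a rescale to the output's scale and zero point), which is where the bulk of the backend work lies.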