fastmachinelearning/qonnx

Support `QConv2DBatchnorm` and `QDenseBatchnorm` layers

Details

Currently, `QConv2DBatchnorm` and `QDenseBatchnorm` layers are not supported for conversion (see #7).

New behavior

Support `QConv2DBatchnorm` and `QDenseBatchnorm` layers in the QKeras to QONNX conversion.
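
To make the request concrete, here is a minimal sketch of the kind of model this would enable converting. The layer arguments and the `from_keras` import path are illustrative assumptions, not a tested recipe:

```python
import tensorflow as tf
from qkeras import QConv2DBatchnorm, QDenseBatchnorm, quantized_bits
from qonnx.converters import from_keras  # conversion entry point (assumed import path)

# Toy model built around the fused QKeras conv/dense + batchnorm layers
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    QConv2DBatchnorm(
        16, (3, 3), padding="same",
        kernel_quantizer=quantized_bits(8, 0, alpha=1),
        bias_quantizer=quantized_bits(8, 0, alpha=1),
    ),
    tf.keras.layers.Flatten(),
    QDenseBatchnorm(
        10,
        kernel_quantizer=quantized_bits(8, 0, alpha=1),
        bias_quantizer=quantized_bits(8, 0, alpha=1),
    ),
])

# Today this fails because the two layers have no conversion handler;
# with this feature it should produce a QONNX model with Quant nodes.
onnx_model = from_keras(model)
```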

Motivation

This would allow MLPerf Tiny models, which use these QKeras layers, to be converted from QKeras to QONNX.

Parts of QONNX being affected

Mainly the (Q)Keras conversion files `src/qonnx/converters/qkeras/qlayers.py` and `src/qonnx/converters/keras.py`.
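
As far as I know, these fused QKeras layers apply their quantizers to the batchnorm-folded weights at inference time, so one plausible way for the converter to handle them is to fold the batchnorm parameters into the kernel and bias and then reuse the existing quantized Conv2D/Dense handling. A hedged sketch of the standard folding arithmetic (purely illustrative, not existing qonnx code):

```python
import numpy as np

def fold_batchnorm(kernel, bias, gamma, beta, mean, variance, eps=1e-3):
    """Fold batchnorm statistics into a conv/dense kernel and bias.

    Illustrative helper (not part of qonnx). Uses the usual inference-time identity
        BN(W x + b) = scale * (W x + b - mean) + beta,  scale = gamma / sqrt(var + eps)
    which is equivalent to W' = scale * W and b' = scale * (b - mean) + beta.
    """
    scale = gamma / np.sqrt(variance + eps)
    folded_kernel = kernel * scale              # scale broadcasts over the output-channel axis
    folded_bias = scale * (bias - mean) + beta
    return folded_kernel, folded_bias
```

Since the layers quantize the folded weights rather than the raw kernel, the converter would presumably need to quantize the folded tensors as well; whether that fits the existing handlers in `qlayers.py` is part of what this issue needs to work out.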

@rushtheboy @nhanvtran @julesmuhizi

Hi, any updates on this?