fastmachinelearning/qonnx

QKeras qconv2d test failure


Prerequisites

Please make sure to check off these prerequisites before submitting a bug report.

  • Test that the bug appears on the current version of the main branch. Make sure to include the commit hash of the commit you checked out.
  • Check that the issue hasn't already been reported, by checking the currently open issues.
  • If there are steps to reproduce the problem, make sure to write them down below.
  • If relevant, please include the ONNX files, which were created directly before and/or after the bug.

Quick summary

Please give a brief and concise description of the bug.

On the latest main commit (d7afcbd), with a particular pytest-randomly seed, one of the QKeras conversion tests (test_qkeras_qconv2d_1[11]) fails with fairly large deviations from the expected value.
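For context on why the seed matters: pytest-randomly reseeds the global RNGs before each test, so both the RandomUniform weight initializers and the np.random.uniform test input depend on the chosen seed. A minimal sketch of that effect (seed values arbitrary, standalone, not the actual pytest-randomly hook):

```python
import random

import numpy as np


def draw(seed):
    # Mimic pytest-randomly, which reseeds the global RNGs with --randomly-seed
    random.seed(seed)
    np.random.seed(seed % 2**32)
    return np.random.uniform(low=-1.0, high=1.0, size=3).astype(np.float32)


a = draw(719809827)
b = draw(719809827)
c = draw(12345)
assert np.array_equal(a, b)      # same seed -> identical "test inputs"
assert not np.array_equal(a, c)  # different seed -> different inputs
```

This is why the failure only reproduces with --randomly-seed=719809827: a different seed draws different weights and inputs, which may not hit the problematic values.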

Details

Please add to the following sections to describe the bug as accurately as possible.

Steps to Reproduce

Add what needs to be done to reproduce the bug. Add code examples where useful
and make sure to include the resulting ONNX files, and the commit hash you are working on.

  1. Clone the qonnx repository
  2. Checkout main branch (tested with hash d7afcbd)
  3. Execute pytest -k "test_qkeras_qconv2d_1[11]" --randomly-seed=719809827 (the quotes keep the shell from interpreting the brackets)

Expected behavior

The test should pass: the converted QKeras->QONNX model should match the QKeras output within tolerance.

Actual behavior

The QONNX execution deviates from the QKeras output beyond the test tolerance:

quantizers = (<qkeras.quantizers.ternary object at 0x7f0cf9ba7fa0>, <qkeras.quantizers.quantized_bits object at 0x7f0cf9ba7fd0>)
request = <FixtureRequest for <Function test_qkeras_qconv2d_1[11]>>

    @pytest.mark.parametrize("quantizers", kb_quantizers, ids=kb_quantizers_ids)
    def test_qkeras_qconv2d_1(quantizers, request):
        kq, bq = quantizers
        k_ini = tf.keras.initializers.RandomUniform(minval=kq.min(), maxval=kq.max())
        b_ini = tf.keras.initializers.RandomUniform(minval=bq.min(), maxval=bq.max())
        x = x_in = Input((28, 28, 3), name="input")
        x = QConv2D(
            32,
            (2, 2),
            strides=(2, 2),
            kernel_quantizer=kq,
            bias_quantizer=bq,
            activation=quantized_bits(4, 4, 1, alpha=1.0),
            kernel_initializer=k_ini,
            bias_initializer=b_ini,
            name="conv2d_0",
        )(x)
        x = QActivation("quantized_relu(6,2)", name="act1")(x)
        x = QConv2D(
            64,
            (3, 3),
            strides=(2, 2),
            kernel_quantizer=kq,
            bias_quantizer=bq,
            use_bias=False,
            kernel_initializer=k_ini,
            bias_initializer=b_ini,
            name="conv2d_1",
        )(x)
        model = Model(inputs=[x_in], outputs=[x])
    
        x_test = np.random.uniform(low=-1.0, high=1.0, size=(1, 28, 28, 3)).astype(dtype=np.float32)
        y_qkeras = model.predict(x_test)
    
        onnx_model, external_storage = from_keras(model, "test_qkeras_conversion", opset=9)
        assert external_storage is None
        model_path = f"model_test_qkeras_qconv2d1_{request.node.callspec.id}.onnx"
        onnx.save(onnx_model, model_path)
    
        onnx_model = ModelWrapper(model_path)
        onnx_model = onnx_model.transform(InferShapes())
    
        idict = {onnx_model.graph.input[0].name: x_test}
        odict = oxe.execute_onnx(onnx_model, idict, True)
        y_qonnx = odict[onnx_model.graph.output[0].name]
    
>       np.testing.assert_allclose(y_qkeras, y_qonnx, rtol=1e-4, atol=1e-4)

tests/keras/test_keras_convert.py:373: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (<function assert_allclose.<locals>.compare at 0x7f0cec725550>, array([[[[15.8125, 31.5625, 31.5   , ..., 13.875 ,  0....875 , -2.    , 27.5   ],
         [15.8125, 27.5625, 27.5   , ..., 11.875 , -6.    , 33.5   ]]]],
      dtype=float32))
kwds = {'equal_nan': True, 'err_msg': '', 'header': 'Not equal to tolerance rtol=0.0001, atol=0.0001', 'verbose': True}

    @wraps(func)
    def inner(*args, **kwds):
        with self._recreate_cm():
>           return func(*args, **kwds)
E           AssertionError: 
E           Not equal to tolerance rtol=0.0001, atol=0.0001
E           
E           Mismatched elements: 36 / 2304 (1.56%)
E           Max absolute difference: 3.9375
E           Max relative difference: 63.
E            x: array([[[[15.8125, 31.5625, 31.5   , ..., 13.875 ,  0.    , 33.5   ],
E                    [15.75  , 25.625 , 33.5625, ...,  5.9375,  3.875 , 27.5625],
E                    [13.8125, 25.625 , 27.5625, ...,  5.875 ,  1.9375, 27.5625],...
E            y: array([[[[15.8125, 31.5625, 31.5   , ..., 13.875 ,  0.    , 33.5   ],
E                    [15.75  , 25.625 , 33.5625, ...,  5.9375,  3.875 , 27.5625],
E                    [13.8125, 25.625 , 27.5625, ...,  5.875 ,  1.9375, 27.5625],...

/usr/lib/python3.8/contextlib.py:75: AssertionError

@selwyn96 @jmduarte could you take a look at this please? I am flagging it because the max absolute/relative differences look rather large, but I suspect this could be an off-by-one error during quantization that is then magnified by a scale factor.
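A hypothetical illustration of that suspicion (not traced to the actual QKeras/QONNX code paths): if the two implementations break rounding ties differently, e.g. round-half-to-even versus round-half-away-from-zero, the quantized integers differ by exactly one LSB at the tie points, and multiplying by the quantizer's scale factor turns that single-LSB disagreement into a large absolute error on a few elements while the rest match exactly, much like the 36/2304 mismatches above:

```python
import numpy as np

scale = 4.0  # illustrative scale factor; the real value depends on the quantizer config


def quantize_half_even(x, scale):
    # np.round uses round-half-to-even (banker's rounding)
    return np.round(x / scale) * scale


def quantize_half_away(x, scale):
    # round half away from zero, as some quantizers implement it
    return np.floor(np.abs(x) / scale + 0.5) * np.sign(x) * scale


# values whose scaled representation lands exactly on rounding tie points
x = np.array([2.0, 6.0, -2.0])  # x / scale = [0.5, 1.5, -0.5]
a = quantize_half_even(x, scale)
b = quantize_half_away(x, scale)
print(np.abs(a - b).max())  # → 4.0, i.e. one LSB magnified by the scale factor
```

If this is the cause, the mismatched elements should all differ by an integer multiple of the relevant quantizer scale, which is easy to check from the saved ONNX file and the captured outputs.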