ModelTC/MQBench

Problem when generating the Deploy model with ONNX-QNN


Hello, and thank you very much for your excellent work. I am a beginner with MQBench. While quantizing a VGG19 model with MQBench's QNN scheme, I found that with the following config, the generated ONNX model cannot go through the next conversion step, i.e. removing the fake-quantize blocks and generating the Deploy model. How can this problem be solved?

            extra_qconfig_dict = {
                'w_observer': 'ClipStdObserver',
                'a_observer': 'ClipStdObserver',
                'w_fakequantize': 'DSQFakeQuantize',
                'a_fakequantize': 'DSQFakeQuantize',
                'w_qscheme': {
                    'bit': 8,
                    'symmetry': True,
                    'per_channel': False,
                    'pot_scale': True
                },
                'a_qscheme': {
                    'bit': 8,
                    'symmetry': True,
                    'per_channel': False,
                    'pot_scale': True
                }
            }
            prepare_custom_config_dict = {
                'extra_qconfig_dict': extra_qconfig_dict
            }
            self.model = prepare_by_platform(self.model, BackendType.ONNX_QNN, prepare_custom_config_dict)
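
For reference, a minimal sketch of the deploy step the question refers to; the import path and call match the traceback below, while the input name and shape are assumed for a VGG19-style model rather than taken from the original script:

    from mqbench.convert_deploy import convert_deploy

    # Assumed input name/shape for VGG19; the actual values come from the training script.
    input_shape = {'data': [1, 3, 224, 224]}
    # This is the call that fails (see the traceback below): it is supposed to strip the
    # fake-quantize blocks and emit the QNN-style Deploy ONNX model.
    convert_deploy(self.model, BackendType.ONNX_QNN, input_shape, model_name='model_QNN')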

The error message is as follows:

  File "openpose_mqb.py", line 411, in train
    convert_deploy(self.model, BackendType.ONNX_QNN, input_shape, model_name = 'model_QNN')
  File "MQBench-0.0.6-py3.9.egg/mqbench/convert_deploy.py", line 184, in convert_deploy
    convert_function(deploy_model, **kwargs)
  File "MQBench-0.0.6-py3.9.egg/mqbench/convert_deploy.py", line 138, in deploy_qparams_tvm
    ONNXQNNPass(onnx_model_path).run(model_name)
  File "MQBench-0.0.6-py3.9.egg/mqbench/deploy/deploy_onnx_qnn.py", line 273, in run
    self.format_qlinear_dtype_pass()
  File "MQBench-0.0.6-py3.9.egg/mqbench/deploy/deploy_onnx_qnn.py", line 258, in format_qlinear_dtype_pass
    scale, zero_point, qmin, qmax = node.input[1], node.input[2], node.input[3], node.input[4]
IndexError: list index (3) out of range
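
For context, the failing pass reads five inputs (input tensor, scale, zero_point, qmin, qmax) from each fake-quantize node in the exported ONNX graph. A toy illustration of the mismatch, assuming the DSQFakeQuantize export only provides three of them:

    # Hypothetical fake-quantize node inputs as presumably exported for DSQFakeQuantize:
    # only the tensor, its scale and its zero point, with no explicit qmin/qmax.
    node_inputs = ['x', 'x_scale', 'x_zero_point']
    scale, zero_point = node_inputs[1], node_inputs[2]   # works
    qmin, qmax = node_inputs[3], node_inputs[4]          # raises an IndexError, as in the traceback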

At the same time, I would also like to ask: if the Academic backend is used, how can the model be converted into a deploy model that supports TVM compilation?
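
Not an answer from the maintainers, but a hint that follows from the traceback above: BackendType.ONNX_QNN is dispatched to deploy_qparams_tvm, so the TVM-compilable deploy model appears to be produced through the ONNX_QNN backend. A hedged sketch of that route, with the same assumed input shape as before:

    # Sketch under the assumption above: prepare and convert with the ONNX_QNN backend
    # (assumption: the Academic setting targets research experiments, not hardware deployment).
    model = prepare_by_platform(model, BackendType.ONNX_QNN, prepare_custom_config_dict)
    # ... calibration / quantization-aware training ...
    convert_deploy(model, BackendType.ONNX_QNN, {'data': [1, 3, 224, 224]}, model_name='model_QNN')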

This issue has not received any updates in 120 days. Please reply to this issue if it is still unresolved!