ModelTC/MQBench

Why don't the scale and zero_point update in LearnableFakeQuantize? I used different methods to quantize YOLOv5.

Closed this issue · 4 comments

```python
kwargs = {
    'input_shape_dict': {'data': [1, 3, opt.imgsz, opt.imgsz]},
    'output_path': output_dir,
    'model_name': model_name,
    'dummy_input': None,
    'onnx_model_path': os.path.join(output_dir, '{}_ori.onnx'.format(model_name)),
}
module_tmp = copy.deepcopy(model)
module_tmp = module_tmp.cpu()
convert_onnx(module_tmp.eval(), **kwargs)
del module_tmp
model = model.train()
# exit(0)

backend = BackendType.Tensorrt
if opt.quantize:
    prepare_custom_config_dict = {
        'extra_qconfig_dict': {'w_fakequantize': 'LearnableFakeQuantize'},
        'concrete_args': {'augment_1': False, 'profile_1': False, 'visualize_1': False}
    }

    # print('named_modules:', dict(model.named_modules())[''])
    model.train()
    model = model.to(device)
    model = prepare_by_platform(model, backend, prepare_custom_config_dict)
    # print('prepared module:', model)
    enable_calibration(model)
    calibration_flag = True
    model = model.to(device)
```
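
For context, the usual MQBench flow after `prepare_by_platform` is to run calibration data through the model and then switch to quantization mode before training. The snippet below is a minimal sketch of that flow under assumptions: `calibration_loader` is a placeholder for your own calibration `DataLoader`, and the `enable_calibration` / `enable_quantization` helpers are the ones documented in the MQBench README.

```python
# Minimal sketch of the usual MQBench QAT flow.
# `calibration_loader` is a hypothetical DataLoader for calibration data.
import torch
from mqbench.prepare_by_platform import prepare_by_platform, BackendType
from mqbench.utils.state import enable_calibration, enable_quantization

model = prepare_by_platform(model, BackendType.Tensorrt, prepare_custom_config_dict)

enable_calibration(model)          # observers collect activation statistics only
model.eval()
with torch.no_grad():
    for imgs, _ in calibration_loader:
        model(imgs.to(device))     # forward passes feed the observers

enable_quantization(model)         # turn fake-quant on before QAT
model.train()                      # then continue with the normal training loop
```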

I didn't change any other code.

[Screenshot: ONNX graph showing the scale parameters of the 1st (left) and 4th (right) LearnableFakeQuantize nodes.]
The ONNX scale parameters of the 1st (left) and 4th (right) LearnableFakeQuantize are the same.
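
A quick way to check this on the PyTorch side, before exporting to ONNX, is to print the learnable scales of the prepared model. This assumes the scales are registered as parameters whose names contain "scale", which is how LearnableFakeQuantize normally exposes them:

```python
# Print the learnable fake-quant scales and whether they can receive gradients.
for name, param in model.named_parameters():
    if 'scale' in name:
        print(name, param.detach().flatten()[:4], 'requires_grad =', param.requires_grad)
```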

Tracin commented

Check if scales are in your Optimizer.
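
For anyone hitting the same thing: YOLOv5 builds its optimizer from hand-picked parameter groups, so the scale / zero_point parameters inserted by `prepare_by_platform` are easy to leave out. The sketch below shows one way to check for and add the missing parameters; it assumes the prepared `model` and an already-built `optimizer`, and that the quantizer parameters have "scale" or "zero_point" in their names.

```python
# Collect the ids of all parameters the optimizer already knows about.
optimizer_params = {id(p) for group in optimizer.param_groups for p in group['params']}

# Find learnable quantization parameters that are not in the optimizer.
missing = [name for name, p in model.named_parameters()
           if ('scale' in name or 'zero_point' in name) and id(p) not in optimizer_params]
print('quant params missing from optimizer:', missing)

# One way to fix it: register them as an extra parameter group.
extra = [p for name, p in model.named_parameters()
         if ('scale' in name or 'zero_point' in name) and id(p) not in optimizer_params]
if extra:
    optimizer.add_param_group({'params': extra})
```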

Thanks! Now the parameters can update.

This issue has not received any updates in 120 days. Please reply to this issue if it is still unresolved!