Xilinx/Vitis-AI

How is the ONNX model quantized?

gyy-bot opened this issue · 0 comments

I used yolovx_nano.onnx (exported from PyTorch via pytorch2onnx) and converted it into the quantized model yolovx_nano.quantQDQ.onnx. When I run it on the board, the model cannot be compiled. I found that the structure of my quantized model differs from the officially provided yolox_nano_onnx_pt.onnx. Could you please provide the code used for the ONNX model quantization? I used the following code:
```python
import vai_q_onnx
from vai_q_onnx import QuantFormat

vai_q_onnx.quantize_static(
    model_input='yolovx_nano.onnx',
    model_output='yolovx_nano.quantQDQ.onnx',
    calibration_data_reader=dr,
    quant_format=QuantFormat.QDQ,
)
```
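For reference, a minimal sketch of the calibration data reader (`dr`) passed above. The class name, the input name `"images"`, and the 416x416 input shape are assumptions for illustration, not taken from the issue; the only requirement is that `get_next()` returns `{input_name: array}` dicts and then `None` when the calibration data is exhausted:

```python
import numpy as np

class YoloCalibrationReader:
    """Hypothetical calibration reader: yields one input dict per batch,
    then None to signal that calibration data is exhausted."""

    def __init__(self, batches):
        # batches: list of numpy arrays shaped (1, 3, H, W), already
        # preprocessed the same way as at inference time
        self._iter = iter(batches)

    def get_next(self):
        batch = next(self._iter, None)
        if batch is None:
            return None
        # "images" is an assumed input name; use the actual input name
        # of your exported ONNX graph
        return {"images": batch}

# usage with random stand-in data (replace with real calibration images)
dr = YoloCalibrationReader(
    [np.zeros((1, 3, 416, 416), dtype=np.float32) for _ in range(2)]
)
```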
Looking forward to your reply.