quic/aimet-model-zoo

ViT evaluator demo: QAT and export to ONNX work, but converting the ONNX model with snpe-onnx-to-dlc fails

Opened this issue · 3 comments

I used the aimet-model-zoo ViT demo (vit_quanteval.py) and exported the quantized model, together with its encodings JSON, to ONNX. I then ran snpe-onnx-to-dlc (v2.10) on it, but the conversion failed with errors:
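For reference, the conversion step would look roughly like this (a minimal sketch; the file names are placeholders, not taken from the report, and passing the AIMET encodings via `--quantization_overrides` is an assumption about the intended workflow):

```shell
# Convert the AIMET-exported ONNX model to a DLC with SNPE v2.10.
# File names below are hypothetical placeholders.
snpe-onnx-to-dlc \
    --input_network vit_export.onnx \
    --output_path vit_export.dlc \
    --quantization_overrides vit_export.encodings
```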

[screenshot of the snpe-onnx-to-dlc error output]

aiwhz commented

@xiejunyi-joey
Have you reproduced the accuracy results with the vit_quanteval.py script?
My test results show that the quantized model's accuracy is not good on ImageNet.

2023-07-18 15:01:42,720 - main - INFO - Original model performances
2023-07-18 15:01:42,720 - main - INFO - ===========================
2023-07-18 15:01:42,720 - main - INFO - Original Model | 32-bit Environment | perplexity : 0.8132
2023-07-18 15:01:42,720 - main - INFO - Original Model | 8-bit Environment | perplexity: 0.0016
2023-07-18 15:01:42,720 - main - INFO - Optimized model performances
2023-07-18 15:01:42,721 - main - INFO - ===========================
2023-07-18 15:01:42,721 - main - INFO - Optimized Model | 32-bit Environment | perplexity: 0.8082
2023-07-18 15:01:42,721 - main - INFO - Optimized Model | 8-bit Environment | perplexity: 0.0020

The original model's 8-bit result is not good:

[screenshot of evaluation results]

aiwhz commented

@xiejunyi-joey I also hit the converter issue; hopefully a later SNPE release will fix it.

Regarding the accuracy issue:

  1. The original model's INT8 result is evaluated without QAT, so its accuracy is poor.
  2. After changing some code, I was able to get correct results for the optimized INT8 model. #38