ViT evaluator demo: QAT and export to ONNX succeed, but converting the ONNX model to SNPE DLC fails
@xiejunyi-joey
Have you reproduced the accuracy results with the script vit_quanteval.py?
My test results show the quantized model's accuracy on ImageNet is poor:
```
2023-07-18 15:01:42,720 - main - INFO - Original model performances
2023-07-18 15:01:42,720 - main - INFO - ===========================
2023-07-18 15:01:42,720 - main - INFO - Original Model | 32-bit Environment | perplexity: 0.8132
2023-07-18 15:01:42,720 - main - INFO - Original Model | 8-bit Environment | perplexity: 0.0016
2023-07-18 15:01:42,720 - main - INFO - Optimized model performances
2023-07-18 15:01:42,721 - main - INFO - ===========================
2023-07-18 15:01:42,721 - main - INFO - Optimized Model | 32-bit Environment | perplexity: 0.8082
2023-07-18 15:01:42,721 - main - INFO - Optimized Model | 8-bit Environment | perplexity: 0.0020
```
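For context, the 32-bit figures above (around 0.81) look like top-1 accuracy fractions rather than perplexity. A minimal sketch of how such a fraction is computed is below; the logits and labels here are made-up placeholder data, not output of the actual evaluation script:

```python
def top1_accuracy(predictions, labels):
    """Fraction of samples whose argmax prediction matches the ground-truth label."""
    correct = sum(
        1 for logits, label in zip(predictions, labels)
        # argmax over the class scores for one sample
        if max(range(len(logits)), key=logits.__getitem__) == label
    )
    return correct / len(labels)

# Hypothetical per-sample class scores (3 samples, 3 classes).
logits = [
    [0.1, 0.8, 0.1],  # predicts class 1
    [0.7, 0.2, 0.1],  # predicts class 0
    [0.2, 0.3, 0.5],  # predicts class 2
]
labels = [1, 0, 1]
print(top1_accuracy(logits, labels))  # 2 of 3 correct
```

With a real evaluation, `predictions` would come from running the FP32 and quantized models over the ImageNet validation set and comparing the two resulting fractions.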
@xiejunyi-joey I also hit the converter issue; hopefully a later SNPE release will fix it.
Regarding the accuracy issue:
- The original model's INT8 numbers are measured without QAT, so the poor accuracy is expected.
- After changing some code, I can get correct INT8 results for the optimized model. #38