LogicTronix/Vitis-AI-Reference-Tutorials

A few small inconsistencies in the code and documentation for the YOLOv3 tutorial

Closed this issue · 2 comments

Quantizing-Compiling-YOLOv3-Pytorch-with-DPU-Inference/README.md
Step 2:

  python gpu_inference.py --quant_mode calib
  python gpu_inference.py --quant_mode test --batch_size 1 --deploy

Should be:

  python quantize_yolov3.py --quant_mode calib
  python quantize_yolov3.py --quant_mode test --batch_size 1 --deploy
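For context, the `--quant_mode` / `--batch_size` / `--deploy` flags used above are typically wired up with argparse. This is a minimal, hypothetical sketch of such a CLI, not the actual argument parser in `quantize_yolov3.py`, which may define additional options or different defaults:

```python
import argparse

def build_parser():
    # Hypothetical sketch of the flags quantize_yolov3.py accepts;
    # the real script may differ in names and defaults.
    parser = argparse.ArgumentParser(description="Quantize YOLOv3 with Vitis-AI")
    parser.add_argument("--quant_mode", choices=["float", "calib", "test"],
                        default="calib",
                        help="'calib' collects quantization statistics, "
                             "'test' evaluates the quantized model")
    parser.add_argument("--batch_size", type=int, default=32,
                        help="batch size; use 1 when exporting for deployment")
    parser.add_argument("--deploy", action="store_true",
                        help="export the deployable model "
                             "(meaningful only with --quant_mode test)")
    return parser

args = build_parser().parse_args(["--quant_mode", "test", "--batch_size", "1", "--deploy"])
print(args.quant_mode, args.batch_size, args.deploy)  # test 1 True
```

This is why the two commands are run in sequence: the `calib` pass writes quantization statistics, and the `test` pass with `--batch_size 1 --deploy` exports the deployable model.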

Quantizing-Compiling-YOLOv3-Pytorch-with-DPU-Inference/Quantization/quantize_result/README.md

python quantize_yolov3.py --deploy_mode calib
python quantize_yolov3.py --deploy_mode test --batch_size 1 --deploy

Should be:

python quantize_yolov3.py --quant_mode calib
python quantize_yolov3.py --quant_mode test --batch_size 1 --deploy

Quantizing-Compiling-YOLOv3-Pytorch-with-DPU-Inference/Quantized inference/quantized_inference.py line 14:

model = torch.jit.load('../Quantization/quantized_result/ModelMain_int.pt', map_location=torch.device('cpu'))

Since the directory was renamed from quantized_result to quantize_result, the path to the model must be adjusted accordingly:

model = torch.jit.load('../Quantization/quantize_result/ModelMain_int.pt', map_location=torch.device('cpu'))
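To make mismatches like this fail loudly, the load can be guarded with a small helper that tries both directory names. This is a hypothetical sketch, not part of the tutorial; `resolve_model_path` and its arguments are illustrative:

```python
from pathlib import Path

def resolve_model_path(base_dir, candidates, filename):
    """Return the first existing <base_dir>/<candidate>/<filename>.

    Hypothetical helper: tries the directory names in `candidates`
    (e.g. the renamed 'quantize_result' and the old 'quantized_result')
    and raises a descriptive error instead of a bare FileNotFoundError.
    """
    for name in candidates:
        path = Path(base_dir) / name / filename
        if path.is_file():
            return path
    raise FileNotFoundError(
        f"{filename} not found under {base_dir} in any of: {', '.join(candidates)}")

# Usage with the paths from the tutorial:
# model_path = resolve_model_path("../Quantization",
#                                 ["quantize_result", "quantized_result"],
#                                 "ModelMain_int.pt")
# model = torch.jit.load(str(model_path), map_location=torch.device("cpu"))
```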

Rather than pushing fixes directly, I'm reporting them here in this ticket. Thank you, the tutorial is helping me a lot right now!

Thank you for taking the time to review the tutorial and provide feedback. I'm glad to hear that you're finding it helpful. Your insights are invaluable to us, and we'll definitely take them into account to improve the quality of our tutorial. If you have more suggestions or find any other issues, please don't hesitate to let us know!