mAP decreased when running INT8 model on Jetson
nghoaithuong opened this issue · 3 comments
nghoaithuong commented
When I run the INT8 model on Jetson, mAP drops by ~10%. The INT8 model can't detect small objects, while the FP16 model detects them normally.
I used the latest TensorRT docker image, nvcr.io/nvidia/l4t-tensorrt:r8.4.1.5-devel.
Tyler-D commented
Did you run it with the TensorRT sample? What model did you use? And did you try INT8 on a dGPU?
nghoaithuong commented
Yes, I ran the TRT sample with my custom model using INT8 on the GPU. The issue only occurs on Jetson; results are normal when I run on a PC with the docker image above.
Tyler-D commented
OK, then it is potentially an L4T TensorRT issue. My suggestion is to pick some of the affected layers and set their precision to FP16.
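For reference, a rough sketch of what that per-layer override could look like with the TensorRT Python API, assuming you build the engine from an INetworkDefinition yourself; the `keyword` layer-name filter is just a placeholder, you would target the layers that actually hurt small-object accuracy (e.g. the detection head):

```python
import tensorrt as trt

def force_fp16_on_selected_layers(network, config, keyword="head"):
    """Keep the network INT8 overall, but run matching layers in FP16.

    `keyword` is a hypothetical filter on layer names; adjust it to
    the layers you want to pull out of INT8.
    """
    # Allow both INT8 and FP16 kernels, and make the builder obey the
    # per-layer precision constraints set below.
    config.set_flag(trt.BuilderFlag.INT8)
    config.set_flag(trt.BuilderFlag.FP16)
    config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)

    for i in range(network.num_layers):
        layer = network.get_layer(i)
        if keyword in layer.name:
            # Force this layer's computation and outputs to FP16.
            layer.precision = trt.DataType.HALF
            for j in range(layer.num_outputs):
                layer.set_output_type(j, trt.DataType.HALF)
```

Then rebuild the engine and compare mAP; if accuracy recovers, you can narrow down which layers are losing precision under INT8 on L4T.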