NVIDIA-AI-IOT/deepstream_tao_apps

Wrong output for a custom YOLOv4 model trained and tested within TLT

AhmedHisham1 opened this issue · 1 comment

I trained a resnet34_yolov4 model with TLT and tested the exported model's inference via the YOLOv4 Jupyter notebook, and it works fine. However, when testing with DeepStream, the model output is completely wrong: the boxes have very small dimensions (x, y values below 10, and w, h values below 1, almost always 0 or close to it), and the class predictions are wrong. Is this caused by the custom batched NMS functions?

The issue has been solved.
The problem was that I had copied the prebuilt https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/TRT-OSS/x86/TRT7.2/libnvinfer_plugin.so.7.2.2 over my installed "libnvinfer_plugin.so*" directly, without building TRT-OSS myself. I assumed it would work because my TensorRT version is also 7.2.2, and the model did in fact run, but the output was wrong and weird.
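
For anyone hitting the same symptom, one quick sanity check (assuming the default x86 library location; adjust the path for your distro or installation) is to inspect which libnvinfer_plugin copy is installed and which one the dynamic loader will actually resolve:

```bash
# Inspect the installed plugin library files and symlink chain
# (assumed default x86 path).
ls -l /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*

# Confirm which copy the dynamic loader picks up at runtime.
ldconfig -p | grep libnvinfer_plugin
```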

After following the steps in https://docs.nvidia.com/metropolis/TLT/tlt-user-guide/text/object_detection/yolo_v4.html#tensorrt-oss-on-x86, it now works as expected.
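
For reference, a rough sketch of what those documented steps amount to, assuming an x86 machine with TensorRT 7.2.2 installed under /usr/lib/x86_64-linux-gnu; the branch tag, the GPU_ARCHS value (75 here is an assumption for a Turing GPU), and the paths all need to match your own setup, so treat the official docs as the source of truth:

```bash
# Clone the TensorRT OSS branch matching your installed TensorRT version
# (release/7.2 is assumed here for TRT 7.2.2).
git clone -b release/7.2 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive

# Configure and build only the plugin library.
mkdir -p build && cd build
cmake .. -DGPU_ARCHS=75 \
         -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu \
         -DTRT_BIN_DIR=$(pwd)/out
make nvinfer_plugin -j$(nproc)

# Back up the stock plugin library, then install the freshly built one
# and refresh the loader cache.
sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.2 \
        ~/libnvinfer_plugin.so.7.2.2.bak
sudo cp $(pwd)/out/libnvinfer_plugin.so.7.2.* \
        /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.2.2
sudo ldconfig
```

The key point is that the plugin library has to be built against your own CUDA/TensorRT installation and GPU architecture; a prebuilt .so with a matching version number can still load and run while producing garbage output, which is exactly what happened here.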