Qjizhi/TensorRT-CenterNet-3D

Why do the inference results contain such a huge number of detected objects?

Opened this issue · 3 comments

First of all, thanks for your great work on TensorRT 3D detection! I have already modified this program and deployed it on TensorRT 8.2, and I successfully built the model engine with that version. However, I get far too many inference results: 3018 num_dets from a single doInference call on one picture, 000292.png. It is very weird. I have checked the 3D model in ONNX format, and its outputs are "(hm,3) (dep,1)", which matches the instructions in your repo. I have no idea what causes this unexpected result. If you have any suggestions, please advise me. Thanks in advance; I would appreciate it very much.

The inference results are displayed in the following picture:
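A detection count on the order of thousands usually means the raw decoded heatmap output is being reported without the usual top-K / score-threshold filtering step that CenterNet-style pipelines apply after inference. The sketch below is only an illustration of that filtering idea, not code from this repo; the function name and data layout are hypothetical:

```python
# Hypothetical sketch: CenterNet-style post-processing keeps only the
# highest-scoring heatmap peaks. Without a score threshold, nearly every
# heatmap cell can end up counted as a "detection".

def filter_detections(dets, score_threshold=0.3, top_k=100):
    """Keep at most top_k detections whose score exceeds score_threshold.

    dets: list of (index, score) pairs from the decoded heatmap.
    """
    kept = [d for d in dets if d[1] >= score_threshold]
    kept.sort(key=lambda d: d[1], reverse=True)
    return kept[:top_k]

# Simulated raw output: thousands of near-zero activations plus a few peaks.
raw = [(i, 0.01) for i in range(3000)] + [(3000, 0.9), (3001, 0.8), (3002, 0.55)]
dets = filter_detections(raw)
print(len(dets))  # 3 strong detections instead of thousands
```

If the TensorRT 8.2 port dropped or reordered this post-processing (or the threshold is effectively zero), every cell above noise level would be emitted, which would explain a num_dets around 3018.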


Can you share how to upgrade TensorRT in this project from 5.x to 8.x?