duangenquan/YoloV2NCS

Tiny Yolo output is different

nathiyaa opened this issue · 5 comments

Hi @duangenquan,

I'm trying to replicate the steps you provided in the ReadMe file of this repo.
I was able to convert the Caffe version of the tiny-yolo Darknet model into NCS format successfully.

I also tried to run the demo on the stick, but the output image generated for the dog.jpg input is different from the result you show in the ReadMe doc: the bounding boxes are shifted a little from where they should be.
I was not able to figure out why it differs from your output. Could you please help me resolve this issue?
Thanks in Advance !

Following is the output that I get:
(attached image: test_12)

I observe this shift in the bounding-box results even for a custom YOLO model that I have trained.

Yes, the bboxes are shifted in yolo's model. You can try several things to reduce this, such as adding pad in the last several layers to make sure the output grid is odd (such as 13x13), instead of even (such as 12x12); see the sketch below.
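To make that concrete, here is a minimal sketch of the size arithmetic, assuming the standard Caffe output-size formulas for convolution and pooling; the layer parameters below are illustrative, not taken from this repo's prototxt:

```python
# Walk a (hypothetical) network tail through the Caffe output-size formulas
# to see where the grid parity goes wrong, then fix it by raising `pad`.
import math

def conv_out(size, kernel, stride=1, pad=0):
    # Caffe convolution: floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride, pad=0):
    # Caffe pooling: ceil((size + 2*pad - kernel) / stride) + 1
    return int(math.ceil((size + 2.0 * pad - kernel) / stride)) + 1

def trace(input_size, layers):
    size = input_size
    for name, kind, k, s, p in layers:
        size = conv_out(size, k, s, p) if kind == "conv" else pool_out(size, k, s, p)
        print("%-6s -> %dx%d" % (name, size, size))
    return size

# Hypothetical tail: a 26x26 map, a 3x3 conv with no pad, then the last 2x2/s2 pool.
broken = [
    ("conv7", "conv", 3, 1, 0),   # 26 -> 24 (pad missing, map shrinks)
    ("pool6", "pool", 2, 2, 0),   # 24 -> 12 (even grid)
]
trace(26, broken)

# Padding the conv keeps the map at 26x26, so the final grid is an odd 13x13.
fixed = [
    ("conv7", "conv", 3, 1, 1),   # 26 -> 26
    ("pool6", "pool", 2, 2, 0),   # 26 -> 13 (odd grid)
]
trace(26, fixed)
```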

As for the inconsistent results you observed, I think it is because I updated the NMS but forgot to update the result image. Thanks for the reminder!

I updated the Python wrapper a little to adjust the bboxes according to letterbox_image in YOLOv2. The result looks like this:
(attached image: yolo_dog)
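For reference, the kind of letterbox correction being described looks roughly like the sketch below, modeled on Darknet's correct_region_boxes; the function name and arguments are my own and may not match the actual wrapper code:

```python
# letterbox_image keeps the aspect ratio and pads the rest of the network
# input, so boxes predicted relative to the net input must be mapped back
# to the original image. Coordinates here are relative (0..1).
def correct_letterbox_box(bx, by, bw, bh, net_w, net_h, img_w, img_h):
    # Size of the resized image inside the letterboxed network input
    if net_w / float(img_w) < net_h / float(img_h):
        new_w = net_w
        new_h = img_h * net_w // img_w
    else:
        new_h = net_h
        new_w = img_w * net_h // img_h
    # Undo the centering offset, then the scale, for the box center
    bx = (bx - (net_w - new_w) / 2.0 / net_w) / (new_w / float(net_w))
    by = (by - (net_h - new_h) / 2.0 / net_h) / (new_h / float(net_h))
    # Rescale the box size
    bw *= net_w / float(new_w)
    bh *= net_h / float(new_h)
    return bx, by, bw, bh

# Example: a 768x576 image fed to a 416x416 network
print(correct_letterbox_box(0.5, 0.5, 0.3, 0.4, 416, 416, 768, 576))
```

Without this correction the boxes come out shifted and slightly squashed, which matches the symptom described above.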

Thanks @duangenquan for the update.

"such as adding pad in the last several layers to make sure the output grid is odd (such as 13x13), instead of even (such as 12x12)"
Hi @duangenquan, how do I add pad in the last several layers?