Engineering-Course/LIP_JPPNet

weird results running evaluate_parsing_JPPNet-s2.py

ChenDRAG opened this issue · 3 comments

I ran the script below using the provided model:
evaluate_parsing_JPPNet-s2.py

and got lots of results like this (it happens nearly all the time when the person doesn't face the camera directly).
Do you have any idea why this happens?

[attached result images: 41_vis, 41, 10_vis, 53_vis]

I use pictures of size 1080×1080. Does the picture size affect your results, or do you resize the picture automatically so it doesn't matter?

The input size doesn't matter. I think the method fails on this kind of image when the person doesn't face the camera directly.
Maybe you can try the more robust PGN model.
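
For reference, the reason the input size doesn't matter is that the evaluation script resizes every image to the network's fixed input resolution before inference (384×384 in this repo, if I recall correctly) and then resizes the predicted parsing map back. Below is a minimal sketch of that resize-and-restore flow; the `run_parsing` helper, the `network_fn` callback, and the interpolation choices are my assumptions for illustration, not the repo's actual code:

```python
# Illustrative sketch only -- not the repo's exact code.
# Shows why the original image size (e.g. 1080x1080) doesn't matter:
# the image is resized to the network's fixed input resolution
# (assumed 384x384 here) before inference, and the predicted parsing
# map is resized back to the original resolution afterwards.
import numpy as np
from PIL import Image

INPUT_SIZE = (384, 384)  # assumed network input size

def run_parsing(image_path, network_fn):
    """network_fn stands in for the trained JPPNet forward pass."""
    original = Image.open(image_path).convert('RGB')
    w, h = original.size  # e.g. 1080, 1080

    # Downscale to the fixed input size expected by the network.
    net_input = np.asarray(original.resize(INPUT_SIZE, Image.BILINEAR),
                           dtype=np.float32)

    # Forward pass: returns a per-pixel label map at input resolution.
    parsing = network_fn(net_input)  # assumed shape (384, 384)

    # Upscale the label map back to the original resolution.
    # NEAREST keeps labels discrete (no interpolated class IDs).
    parsing_full = Image.fromarray(parsing.astype(np.uint8)) \
                        .resize((w, h), Image.NEAREST)
    return np.asarray(parsing_full)
```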

Did you have to make any changes to the code to get the model to work like this? I downloaded the weights and everything in the hope of it working, but my results are not good - see #47