Can the code only detect one person in one image?
aidarikako opened this issue · 5 comments
When I print(inputs.size(0)) at line 83 of 'test.py', the output is 128. But the test batch size is also 128, which would mean every one of the 128 images from val2017 contains only one person. I suspect the code only detects one person's keypoints per picture.
When I test my own picture, I also find that only one person is detected even though my picture contains three people (since I don't have a gt_bbox for my own picture, I had to reshape the picture myself to fit the code).
So I would like to know: does the code only detect one person's keypoints per picture, without supporting multiple persons?
Following the paper, this code is based on the top-down approach (detect persons first, then estimate each person's keypoints). If you want to use this code on multi-person images, you need to provide a person bounding box for each person and then combine the results together.
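To illustrate the top-down workflow described above, here is a minimal sketch of a wrapper that crops each provided person bounding box, runs a single-person estimator on the crop, and maps the keypoints back to full-image coordinates. The function name `estimate_multi_person` and the estimator interface (crop in, keypoints in crop coordinates out) are hypothetical, not part of this repository's API:

```python
import numpy as np

def estimate_multi_person(image, bboxes, single_person_model):
    """Hypothetical top-down wrapper.

    image: HxWxC array of the full picture.
    bboxes: list of (x, y, w, h) person boxes from any detector.
    single_person_model: callable taking a crop and returning a list of
        (kx, ky) keypoints in the crop's own coordinate frame.
    Returns one keypoint list per person, in full-image coordinates.
    """
    results = []
    for (x, y, w, h) in bboxes:
        crop = image[y:y + h, x:x + w]          # cut out one person
        keypoints = single_person_model(crop)    # single-person inference
        # Shift crop coordinates back into the full image frame.
        results.append([(x + kx, y + ky) for kx, ky in keypoints])
    return results
```

In a real pipeline you would also resize each crop to the network's input size and rescale the keypoints accordingly; the sketch above only shows the crop/estimate/combine structure.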
Thank you for your work and your reply.
It seems the problem has been solved. Closing this issue.
@GengDavid @aidarikako @mingloo
Hello, why am I getting such a large loss? For example:
Total params: 104.55MB
Epoch: 1 | LR: 0.00050000
iteration 100 | loss: 362.8368835449219, global loss: 246.98593711853027, refine loss: 115.85093688964844, avg loss: 403.03418150042546
Any advice? Thanks.
@my-hello-world Hi, I am seeing the same large loss as you. Did you find out the reason? Thank you very much.