giddyyupp/coco-minitrain

Result table for keypoint detection

Opened this issue · 1 comments

Thank you for your excellent project!

I'm training the Keypoint R-CNN network implemented in torchvision, scaling the images to the size listed in the object detection result tables.

I know that the input image sizes for SimpleBaseline2D and the R-CNN family of networks (Faster, Mask, etc.) are different.

I'd like to know how the input images were scaled for the minicoco dataset in the keypoint detection experiment that used the SimpleBaseline2D network.

Even though the networks differ, is the same input size used for both the object detection task and the keypoint detection task?

Thanks in advance!

Hello, we trained the pose estimation models using MMPose, so for each method you can check the corresponding config file to see the image size used for training.

For SimpleBaseline2D, we resized the image to 256x192 pixels for training.
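As a minimal sketch of that preprocessing step, the snippet below resizes an arbitrary image to the 256x192 (height x width) input mentioned above. Note this is a simplified illustration: MMPose's actual top-down pipeline crops the detected person box and applies an affine warp rather than naively resizing the whole image, and the function name here is hypothetical.

```python
from PIL import Image
import numpy as np

# Target input size for SimpleBaseline2D training: 256 (height) x 192 (width).
TARGET_H, TARGET_W = 256, 192

def resize_for_simplebaseline(img: Image.Image) -> Image.Image:
    # Hypothetical helper: naive whole-image resize to the network input size.
    # PIL's resize() takes (width, height), so the order is swapped here.
    return img.resize((TARGET_W, TARGET_H), Image.BILINEAR)

# Usage with a synthetic 480x640 RGB image.
img = Image.fromarray(np.zeros((480, 640, 3), dtype=np.uint8))
resized = resize_for_simplebaseline(img)
print(resized.size)  # PIL reports (width, height) -> (192, 256)
```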