dumyy/handpose

Does this method require detecting hand centers first at prediction time?

Closed this issue · 4 comments

Do we need to detect hand centers at prediction time?

dumyy commented

The output of the model is a normalized representation, so you need a center to transform it into world coordinates.

@dumyy What do you mean by needing a center to transform to world coordinates? Is the center required as a network input?

I'm just wondering whether the network can take a single image as input and produce output that can be drawn directly on the image.

dumyy commented

The input of the network is only a single depth image, and the output is normalized 3D coordinates. So you need to transform them into real-world coordinates in post-processing. If you want to draw the output on the image, just project the 3D pose onto the 2D image using the camera parameters.
All of this can be found in the test file.
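The post-processing described above can be sketched roughly as follows. This is not the repository's actual code: the cube size, the assumption that outputs are normalized to [-1, 1] within a cube around the detected hand center, and the camera intrinsics (`fx`, `fy`, `cx`, `cy`) are all illustrative placeholders.

```python
import numpy as np

def denormalize_to_world(joints_norm, center_xyz, cube_mm=250.0):
    """Map normalized joints back to camera-space millimetres.

    joints_norm: (J, 3) network output, assumed normalized to [-1, 1]
      within a cube of side `cube_mm` centered on the hand center.
    center_xyz: (3,) hand center in camera space (from the detector).
    """
    return joints_norm * (cube_mm / 2.0) + center_xyz

def project_to_image(joints_world, fx, fy, cx, cy):
    """Pinhole projection: camera-space (x, y, z) mm -> pixel (u, v)."""
    u = joints_world[:, 0] * fx / joints_world[:, 2] + cx
    v = joints_world[:, 1] * fy / joints_world[:, 2] + cy
    return np.stack([u, v], axis=1)

# Made-up example values: a 2-joint "pose" and typical depth-sensor intrinsics.
joints_norm = np.array([[0.0, 0.0, 0.0], [0.5, -0.2, 0.1]])
center = np.array([30.0, -20.0, 400.0])  # hand center in mm, from detection
world = denormalize_to_world(joints_norm, center)
pixels = project_to_image(world, fx=588.0, fy=587.0, cx=320.0, cy=240.0)
```

A joint predicted at the normalized origin lands exactly on the detected hand center, which is why the center (and hence a detection step) is required before the normalized output becomes usable.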

Thank you so much... so this method still needs depth. I am searching for a method that uses only an RGB image.