Will you release the demo code whose input is a 3D human pose?
sunbin1357 opened this issue · 3 comments
I don't have plans to clean up and share the code any time soon. I can, however, guide you to implement it yourself. The inputs/outputs are the same. It's just a matter of doing the skeleton mapping I describe in the paper and building the code to feed the pose estimates to our method. Shoot me an email, and I can help you make the script.
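To make the mapping step concrete, here is a rough sketch of what I mean. The joint names, indices, and array shapes below are placeholders, not our actual API; use the mapping described in the paper for your particular skeletons:

```python
# A minimal sketch of the skeleton mapping step, assuming pose estimates come
# as (T, J, 3) arrays of joint positions. All names below are placeholders.
import numpy as np

# Placeholder: index of each source (human) joint in the pose estimator output.
SOURCE_JOINTS = {"hips": 0, "spine": 1, "neck": 2, "head": 3,
                 "l_shoulder": 4, "l_elbow": 5, "l_wrist": 6,
                 "r_shoulder": 7, "r_elbow": 8, "r_wrist": 9}

# Placeholder: which source joint drives each joint of the target skeleton.
TARGET_TO_SOURCE = {"pelvis": "hips", "chest": "spine", "head": "head",
                    "left_arm": "l_shoulder", "left_forearm": "l_elbow",
                    "right_arm": "r_shoulder", "right_forearm": "r_elbow"}

def map_skeleton(pose_seq: np.ndarray, target_joints: list) -> np.ndarray:
    """Reorder a (T, J_src, 3) pose sequence into the target joint order."""
    idx = [SOURCE_JOINTS[TARGET_TO_SOURCE[j]] for j in target_joints]
    return pose_seq[:, idx, :]  # shape (T, J_tgt, 3)
```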
When retargeting motion from video, we first get an estimate of the pose sequence using a 3D pose estimation method. From that, we use our method to generate joint rotations for the target character in T-pose (rest pose). If you have the robot configuration in rest pose, you could use our method to get the joint rotations from rest pose that imitate the input human motion. Then you would use the robot controller to track the retargeted motion. Let me know if this doesn't make sense.
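Roughly, the full pipeline could look like the sketch below. Note that `estimate_3d_pose`, `retarget`, and `RobotController` are all placeholders standing in for a 3D pose estimator, our method, and your robot's control interface; `map_skeleton` is the mapping sketch from my earlier comment:

```python
# End-to-end sketch of the pipeline described above, under the same
# placeholder assumptions as the mapping sketch.
import numpy as np

ROBOT_JOINTS = ["pelvis", "chest", "head",          # placeholder joint list
                "left_arm", "left_forearm",
                "right_arm", "right_forearm"]

def imitate_from_video(video_path: str, rest_pose: np.ndarray) -> None:
    # 1. Estimate the 3D human pose sequence: (T, J, 3) joint positions.
    pose_seq = estimate_3d_pose(video_path)              # placeholder

    # 2. Map the estimated human skeleton onto the robot's joints.
    mapped = map_skeleton(pose_seq, target_joints=ROBOT_JOINTS)

    # 3. Generate per-frame joint rotations relative to the robot's rest
    #    pose, e.g. as (T, J_robot, 4) quaternions.
    rotations = retarget(mapped, rest_pose=rest_pose)    # placeholder

    # 4. Drive the controller so the robot tracks the retargeted motion.
    controller = RobotController()                       # placeholder
    for frame in rotations:
        controller.set_joint_rotations(frame)
```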