Question about applying pose imitator for pose estimator step
SonNguyen2510 opened this issue · 3 comments
Dear author,
Thank you for your amazing work.
Currently I'm learning about 3D pose estimation and I'm really impressed by your work. As I understand it so far, your estimator_inference step (when testing with videos) doesn't apply the pose imitator. Could you tell me why you don't apply it?
And if I want to apply the imitator, could you tell me how to do it? Sorry, I am new to the pose estimation field.
Thank you so much for your time.
Thank you for the interest!
The core idea of this framework is:
(A) the estimator converts 2D poses to (possibly implausible) 3D motion,
(B) the imitator generates plausible 3D motion by adding physical correction,
(C) the hallucinator acts as a generator to diversify the motion.
If you want to apply the imitator to an in-the-wild case, you may replace the H36M reference motion from (A) with your in-the-wild result, then apply the imitator (B) with the pretrained weights, or finetune it for some iterations (e.g. 100) for a better fit.
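The three stages above can be sketched roughly as follows. Every function name, shape, and operation here is a hypothetical placeholder for illustration, not the repository's actual API:

```python
import numpy as np

def estimator(poses_2d):
    """(A) Lift 2D poses to 3D motion; the raw lift may be physically implausible."""
    T, J, _ = poses_2d.shape
    z = np.zeros((T, J, 1))  # naive constant depth as a stand-in for the learned lift
    return np.concatenate([poses_2d, z], axis=-1)

def imitator(motion_3d):
    """(B) Apply physical correction; here a simple temporal smoothing stand-in."""
    corrected = motion_3d.copy()
    corrected[1:-1] = (motion_3d[:-2] + motion_3d[1:-1] + motion_3d[2:]) / 3.0
    return corrected

def hallucinator(motion_3d, n_variants=2, scale=0.01, seed=0):
    """(C) Diversify a plausible motion into several variants."""
    rng = np.random.default_rng(seed)
    return [motion_3d + rng.normal(0.0, scale, motion_3d.shape)
            for _ in range(n_variants)]

poses_2d = np.random.rand(16, 17, 2)   # 16 frames, 17 joints, (x, y)
rough_3d = estimator(poses_2d)         # (A) possibly implausible 3D motion
plausible_3d = imitator(rough_3d)      # (B) physically corrected motion
variants = hallucinator(plausible_3d)  # (C) diversified motions
```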
In detail:
The first thing is to prepare a pre-trained imitator weight. This can be done by:
step a: download the checkpoint folder here, which contains the reference motion of each round (the imitator weight is not provided here because the process of exporting data out of our server is too complicated).
step b: take the last round and train the imitator following lines 268 to 276 in the script.
The second thing is to prepare the reference motion. This can be done by:
step a: use estimator_inference to save a pickle file in joint-xyz format.
step b: replace the H36M motion variable predicted_3d_wpos_withroot at line 318 with the in-the-wild motion.
step c: change the takes at line 336 to the number of videos you want to imitate. The original value of 600 is for H36M, which has 600 clips.
step d: change the imitator's configuration file from H36M to yours, and run something similar to lines 268 to 273 to prepare the in-the-wild reference motion trajectory.
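The steps above can be sketched roughly as follows. The file name and the array layout (frames x joints x xyz) are assumptions for illustration, not the repository's exact format:

```python
import pickle
import numpy as np

# step a: estimator_inference would save something like this joint-xyz pickle
# (the shape here is an assumption: num_frames x num_joints x 3)
in_the_wild_motion = np.random.rand(120, 17, 3).astype(np.float32)
with open("wild_motion.pkl", "wb") as f:
    pickle.dump(in_the_wild_motion, f)

# step b: load the in-the-wild result and use it in place of the H36M motion
# wherever predicted_3d_wpos_withroot was used (line 318 in the script)
with open("wild_motion.pkl", "rb") as f:
    predicted_3d_wpos_withroot = pickle.load(f)

# step c: set `takes` to the number of videos to imitate (H36M used 600 clips)
takes = 1
```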
The third thing is to try the imitator on the in-the-wild reference motion:
step a: run the imitator inference at line 278 directly,
step b: or finetune it using line 276 with --iter 6000.
Hope this helps. :)
Thank you so much for your detailed explanation!
Hello, thank you for your work first. I'm new to this. I got the itr6000 pkl directly using the helix5 training commands in the script, and I'm wondering how I can physically optimize my own in-the-wild video. I would appreciate your reply.