Visualization-of-Human3.6M-Dataset

Plot and save the ground truth and predicted results of the Human3.6M and CMU MoCap datasets.

human-motion-prediction

This is the code for visualizing the ground truth and predicted results of the Human3.6M dataset.

To save the gif for the ground truth data, run

python forward_kinematics.py --save --save_name "figs/walking.gif"

To save the visualization for the trained-model samples file samples.h5, run

python forward_kinematics.py --sample_name samples.h5 --save --save_name "figs/walking.gif"

Finally, to visualize the samples without saving, run

python forward_kinematics.py
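Under the hood, saving a gif like the ones above comes down to matplotlib's animation API. Here is a minimal, self-contained sketch of that mechanism — not this repo's code, and the trajectory data is synthetic:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

# Synthetic 2-D trajectory standing in for real joint positions.
rng = np.random.default_rng(0)
frames = np.cumsum(rng.normal(scale=0.05, size=(40, 2)), axis=0)

fig, ax = plt.subplots()
ax.set_xlim(-2, 2)
ax.set_ylim(-2, 2)
line, = ax.plot([], [], "o-")

def update(i):
    # Draw the trajectory up to frame i.
    line.set_data(frames[: i + 1, 0], frames[: i + 1, 1])
    return (line,)

anim = FuncAnimation(fig, update, frames=len(frames))
anim.save("walking.gif", writer=PillowWriter(fps=25))
```

The real script draws the full skeleton per frame instead of a single trajectory, but the save path (FuncAnimation plus a gif writer) is the same idea.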

This is a sample of the walking action, saved from the text form of the data.



This is an example of the discussion activity from the Human3.6M dataset. Our code gives you a way to create a ground-truth gif from the text form of the data:

python create_video_gt.py --save --save_name "images/H3.6M/gt/S5/discussion.gif"
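For reference, the text form of the data is plain comma-separated values, one frame per line of joint angles (the layout used in the Martinez et al. data release). A minimal loader sketch; `load_action` is a hypothetical helper, not part of this repo:

```python
import io
import numpy as np

def load_action(source):
    """Load one action file into an (n_frames, n_dims) array of joint angles."""
    return np.loadtxt(source, delimiter=",")

# Tiny two-frame example in the same comma-separated layout.
demo = io.StringIO("0.1,0.2,0.3\n0.4,0.5,0.6\n")
poses = load_action(demo)
print(poses.shape)  # (2, 3); real files have one row per frame
```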



The data folder contains only subject 5 due to space constraints.

To download the full dataset, run

wget http://www.cs.stanford.edu/people/ashesh/h3.6m.zip

Acknowledgments

Julieta Martinez, Michael J. Black, Javier Romero. On human motion prediction using recurrent neural networks. In CVPR 17.

The paper is also available on arXiv: https://arxiv.org/pdf/1705.02445.pdf

The code in this repository was written by Julieta Martinez and Javier Romero.

Thank you

Gaurav