human_motion_manifold

Official PyTorch implementation of "Constructing Human Motion Manifold with Sequential Networks".


Constructing Human Motion Manifold with Sequential Networks
Deok-Kyeong Jang, Sung-Hee Lee
Computer Graphics Forum, 2020 (presented at Eurographics 2021)

Paper: https://arxiv.org/abs/2005.14370
Project: http://motionlab.kaist.ac.kr/?page_id=5962
Video: https://www.youtube.com/watch?v=DPXnidbmtvs

Abstract: This paper presents a novel recurrent neural network-based method to construct a latent motion manifold that can represent a wide range of human motions in a long sequence. We introduce several new components to increase the spatial and temporal coverage in motion space while retaining the details of motion capture data. These include new regularization terms for the motion manifold, combination of two complementary decoders for predicting joint rotations and joint velocities, and the addition of the forward kinematics layer to consider both joint rotation and position errors. In addition, we propose a set of loss terms that improve the overall quality of the motion manifold from various aspects, such as the capability of reconstructing not only the motion but also the latent manifold vector, and the naturalness of the motion through adversarial loss. These components contribute to creating compact and versatile motion manifold that allows for creating new motions by performing random sampling and algebraic operations, such as interpolation and analogy, in the latent motion manifold.
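For intuition, the core idea above can be sketched as a sequence autoencoder with one encoder and two complementary decoders. The sketch below is illustrative only: layer types, sizes, and names are assumptions rather than the official architecture, and the forward kinematics layer and loss terms are omitted.

import torch
import torch.nn as nn

class MotionManifoldSketch(nn.Module):
    # Illustrative only: one sequence encoder feeding two complementary
    # decoders (joint rotations and joint velocities), per the abstract.
    def __init__(self, in_dim=96, latent_dim=64, hidden=512):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)
        self.rot_decoder = nn.GRU(latent_dim, hidden, batch_first=True)
        self.rot_head = nn.Linear(hidden, in_dim)    # predicts joint rotations
        self.vel_decoder = nn.GRU(latent_dim, hidden, batch_first=True)
        self.vel_head = nn.Linear(hidden, in_dim)    # predicts joint velocities

    def forward(self, motion):                       # motion: (batch, frames, in_dim)
        _, h = self.encoder(motion)
        z = self.to_latent(h[-1])                    # latent manifold vector
        z_seq = z.unsqueeze(1).expand(-1, motion.size(1), -1)
        rot, _ = self.rot_decoder(z_seq)
        vel, _ = self.vel_decoder(z_seq)
        # A forward kinematics layer would map predicted rotations to joint
        # positions so both rotation and position errors can be penalized.
        return self.rot_head(rot), self.vel_head(vel), z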

Requirements

  • PyTorch >= 1.5
  • Tensorboard 2.4.1
  • h5py 2.9.0
  • tqdm 4.35.0
  • PyYAML 5.4
  • matplotlib 3.1.1

Installation

Clone this repository and create environment:

git clone https://github.com/DK-Jang/human_motion_manifold.git
cd human_motion_manifold
conda create -n motion_manifold python=3.6
conda activate motion_manifold

First, install PyTorch >= 1.5 and torchvision from the official PyTorch site.
Then install the other dependencies:

pip install -r requirements.txt 
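A quick, purely optional sanity check that the environment matches the versions listed above:

import torch, h5py, yaml, tqdm, matplotlib
print(torch.__version__)   # expect >= 1.5
print(h5py.__version__)    # expect 2.9.0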

Datasets and pre-trained networks

To train the Human Motion Manifold network, download the dataset. To run the demo, download both the dataset and the pre-trained weights.

H3.6M dataset: download the H3.6M dataset (npz) from Google Drive, then place the npz file inside the dataset/ directory.
After that, run the following commands:

cd dataset
python make_train_test_folder.py
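If you want to verify the download before building the train/test folders, the npz archive can be opened with NumPy. The file name below is a placeholder; use whatever name the Google Drive download has.

import numpy as np

data = np.load('dataset/h36m.npz', allow_pickle=True)  # placeholder file name
print(data.files)   # list the arrays stored in the archive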

Pre-trained weights: download the weights from Google Drive, then place the .pt file inside the pretrained/pth/ directory.

How to run the demo

After downloading the pre-trained weights, you can run the demo.

  • Motion reconstruction: run the following commands:
python reconstruction.py --config pretrained/info/config.yaml   # generate motions
python result2bvh.py --bvh_dir ./pretrained/output/recon/bvh \
                     --hdf5_path ./pretrained/output/recon/m_recon.hdf5    # hdf5 to bvh 

Generated motions (hdf5 format) will be placed under ./pretrained/output/recon/*.hdf5:
m_gt.hdf5: ground-truth motion,
m_recon.hdf5: motion generated by the joint-rotation decoder,
m_recon_vel.hdf5: motion generated by the joint-velocity decoder.
Generated motions (bvh format) from the joint-rotation decoder will be placed under ./pretrained/output/recon/bvh/batch_*.bvh.
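The hdf5 outputs can be inspected with h5py (already in the requirements). The internal layout is not documented here, so this simply walks whatever groups and datasets exist:

import h5py

with h5py.File('pretrained/output/recon/m_recon.hdf5', 'r') as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))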

  • Random sampling of motions from the motion manifold:
python random_sample.py --config pretrained/info/config.yaml
python result2bvh.py --bvh_dir ./pretrained/output/random_sample/bvh \
                     --hdf5_path ./pretrained/output/random_sample/m_recon.hdf5

Generated motions will be placed under ./pretrained/output/random_sample/*.hdf5.
Generated motions (bvh format) from the joint-rotation decoder will be placed under ./pretrained/output/random_sample/bvh/batch_*.bvh.
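Conceptually, random sampling draws a latent vector from the prior and runs it through a trained decoder. A minimal sketch; the latent dimension and decoder interface are assumptions, and the real values live in config.yaml and random_sample.py:

import torch

latent_dim = 64                    # assumed; the real value comes from the config
z = torch.randn(1, latent_dim)     # sample a latent manifold vector
# motion = rot_decoder(z)          # decode with the trained joint-rotation decoder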

Todo

  • Denoising
  • Interpolation (sketched below)
  • Analogy (sketched below)
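Interpolation and analogy are the algebraic latent-space operations mentioned in the abstract. A minimal sketch of both on latent manifold vectors; variable names are illustrative, and a trained encoder/decoder are assumed available to produce and consume the codes:

import torch

z_a, z_b, z_c = (torch.randn(64) for _ in range(3))  # latent codes of three motions

# Interpolation: blend two motions by blending their latent codes.
t = 0.5
z_interp = (1 - t) * z_a + t * z_b

# Analogy: a is to b as c is to d, so transfer the a-to-b change onto c.
z_d = z_c + (z_b - z_a)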

How to train

To train the human motion manifold network from scratch, run the following command.

python train.py --config configs/H3.6M.yaml

Trained networks will be placed under ./motion_manifold_network/.
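train.py is driven entirely by the YAML config. If you want to inspect or tweak the settings programmatically, PyYAML (listed in the requirements) loads it directly; the exact keys depend on the repo:

import yaml

with open('configs/H3.6M.yaml') as f:
    config = yaml.safe_load(f)
print(list(config.keys()))   # top-level training settings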

Visualization

An easy way to visualize reconstruction results is with matplotlib. To do so, save the demo results as world positions.
For example:

python reconstruction.py --config pretrained/info/config.yaml \
                         --output_representation positions_world
python visualization.py --viz_path ./pretrained/output/recon/m_recon.hdf5
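For a quick hand-rolled view without visualization.py, one frame of world positions can be scattered with matplotlib. The hdf5 key layout and the joints-times-xyz shape below are assumptions:

import h5py
import matplotlib.pyplot as plt

with h5py.File('pretrained/output/recon/m_recon.hdf5', 'r') as f:
    key = next(iter(f.keys()))   # first top-level entry; real keys may differ
    motion = f[key][...]         # assumed shape: (frames, joints * 3)

frame = motion.reshape(motion.shape[0], -1, 3)[0]   # xyz triples per joint (assumed)
plt.scatter(frame[:, 0], frame[:, 2])               # top-down view of the skeleton
plt.title('frame 0 joint world positions')
plt.show()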

Citation

If you find this work useful for your research, please cite our paper:

@article{jang2020constructing,
  title={Constructing human motion manifold with sequential networks},
  author={Jang, Deok-Kyeong and Lee, Sung-Hee},
  journal={Computer Graphics Forum},
  volume={39},
  number={6},
  pages={314--324},
  year={2020},
  publisher={Wiley Online Library}
}

Acknowledgements

This repository contains pieces of code from the following repositories:

  • QuaterNet: A Quaternion-based Recurrent Model for Human Motion.
  • A Deep Learning Framework for Character Motion Synthesis and Editing.