
Code for "GVHMR: World-Grounded Human Motion Recovery via Gravity-View Coordinates", SIGGRAPH Asia 2024


GVHMR: World-Grounded Human Motion Recovery via Gravity-View Coordinates

Zehong Shen*, Huaijin Pi*, Yan Xia, Zhi Cen, Sida Peng, Zechen Hu, Hujun Bao, Ruizhen Hu, Xiaowei Zhou
SIGGRAPH Asia 2024


TODO List and ETA

  • Code for reproducing the train and test results (2024-8-5)
  • Demo code (2024-8-5)
  • Code release (2024-9-4)
  • Arxiv paper link (2024-9-10)
  • Google Colab demo (2024-9-15)
  • HuggingFace demo (2024-9-15)

Setup

Please see the installation instructions for details.

Quick Start

Demo

Demo entries are provided in tools/demo. Use -s to skip visual odometry if you know the camera is static; otherwise the camera motion will be estimated by DPVO. We also provide a script, demo_folder.py, to run inference on an entire folder.

python tools/demo/demo.py --video=docs/example_video/tennis.mp4 -s
python tools/demo/demo_folder.py -f inputs/demo/folder_in -d outputs/demo/folder_out -s
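
The -s flag in demo_folder.py applies to every video in the folder. If you need per-video control (e.g. a mix of static and moving cameras), a minimal sketch along the following lines drives the documented demo CLI from Python. The folder path and the STATIC_VIDEOS set are placeholders for your own data, not part of the repository.

# Hypothetical batch driver: calls the documented demo CLI once per video,
# deciding per video whether to pass -s (static camera).
import subprocess
import sys
from pathlib import Path

VIDEO_DIR = Path("inputs/demo/my_videos")       # assumption: your own folder of .mp4 files
STATIC_VIDEOS = {"tennis.mp4", "lobby.mp4"}     # assumption: videos known to have a static camera

for video in sorted(VIDEO_DIR.glob("*.mp4")):
    cmd = [sys.executable, "tools/demo/demo.py", f"--video={video}"]
    if video.name in STATIC_VIDEOS:
        cmd.append("-s")  # skip visual odometry for static cameras
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)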

Reproduce

  1. Test: To reproduce the 3DPW, RICH, and EMDB results in a single run, use the following command:

    python tools/train.py global/task=gvhmr/test_3dpw_emdb_rich exp=gvhmr/mixed/mixed ckpt_path=inputs/checkpoints/gvhmr/gvhmr_siga24_release.ckpt

    To test individual datasets, change global/task to gvhmr/test_3dpw, gvhmr/test_rich, or gvhmr/test_emdb (see the sketch after this list).

  2. Train: To train the model, use the following command:

    # The gvhmr_siga24_release.ckpt checkpoint was trained on 2x RTX 4090 GPUs for 420 epochs;
    # note that different GPU settings may lead to different results.
    python tools/train.py exp=gvhmr/mixed/mixed

    Note that training-time evaluation does not apply the post-processing used in the test script, so the global metrics will differ (but should still be suitable for comparison with baseline methods).
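
For the per-dataset evaluations mentioned in step 1, a small convenience sketch like the one below runs the three tasks back to back. The task names, experiment config, and checkpoint path are copied from the commands above; the loop itself is an assumption and not part of the repository.

# Hypothetical helper: run the per-dataset test tasks one after another.
# Task names and checkpoint path come from the commands documented above.
import subprocess
import sys

CKPT = "inputs/checkpoints/gvhmr/gvhmr_siga24_release.ckpt"
TASKS = ["gvhmr/test_3dpw", "gvhmr/test_rich", "gvhmr/test_emdb"]

for task in TASKS:
    cmd = [
        sys.executable, "tools/train.py",
        f"global/task={task}",
        "exp=gvhmr/mixed/mixed",
        f"ckpt_path={CKPT}",
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)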

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{shen2024gvhmr,
  title={World-Grounded Human Motion Recovery via Gravity-View Coordinates},
  author={Shen, Zehong and Pi, Huaijin and Xia, Yan and Cen, Zhi and Peng, Sida and Hu, Zechen and Bao, Hujun and Hu, Ruizhen and Zhou, Xiaowei},
  booktitle={SIGGRAPH Asia Conference Proceedings},
  year={2024}
}

Acknowledgement

We thank the authors of WHAM, 4D-Humans, and ViTPose-Pytorch for their great work, without which our project and code would not have been possible.