
WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion


[Demo video: demo.mp4]

Introduction

This repository is the official PyTorch implementation of WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion. For more information, please visit our project page.

Installation

Please see Installation for details.

Quick Demo

Registration

To download the SMPL body models (Neutral, Female, and Male), you need to register at the SMPL and SMPLify websites. Your username and password for both sites will be used when fetching the demo data.

Next, run the following script to fetch the demo data. It downloads all required dependencies, including the trained models and demo videos.

bash fetch_demo_data.sh

You can try it with one example video:

python demo.py --video examples/IMG_9732.mov --visualize
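After the demo finishes, you can inspect the saved outputs in Python. A minimal sketch, assuming the demo serializes per-subject results as a joblib pickle under the output directory (the exact path and dictionary keys are assumptions; check the demo's console log and code for the actual location):

import joblib

# Hypothetical output path; the demo logs where it actually writes results.
results = joblib.load('output/demo/IMG_9732/wham_output.pkl')

# Assumed layout: subject ID -> per-frame predictions (pose, betas, translation, ...).
for subject_id, pred in results.items():
    print(subject_id, list(pred.keys()))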

By default, we assume the camera focal length following CLIFF. If the camera intrinsics are known, you can specify them as [fx fy cx cy], and they will be used for SLAM, as in the example below:

python demo.py --video examples/drone_video.mp4 --calib examples/drone_calib.txt --visualize
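The calibration file is plain text containing the four intrinsics [fx fy cx cy] in pixels. For example, with hypothetical values for a 1920x1080 camera:

1714.0 1714.0 960.0 540.0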

If you only need motion in the camera coordinate frame, you can skip SLAM:

python demo.py --video examples/IMG_9732.mov --visualize --estimate_local_only
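With SLAM enabled, WHAM places the person in a consistent world frame; with --estimate_local_only, joints remain in each frame's camera coordinates. If you later obtain camera extrinsics from another source, moving between the two frames is a rigid transform. A minimal numpy sketch (illustrative names, not the repository's API):

import numpy as np

def cam_to_world(joints_cam, R_wc, t_wc):
    # joints_cam: (J, 3) joints in camera coordinates.
    # R_wc: (3, 3) camera-to-world rotation; t_wc: (3,) camera position in world.
    return joints_cam @ R_wc.T + t_wc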

You can further refine WHAM's results with Temporal SMPLify as a post-processing step, which improves both 2D alignment and 3D accuracy. Simply add the --run_smplify flag when running the demo.
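For example:

python demo.py --video examples/IMG_9732.mov --visualize --run_smplify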

Docker

Please refer to Docker for details.

Python API

Please refer to API for details.
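As a rough sketch of what API usage looks like (the import path, class name, and return values below follow the API document at the time of writing; treat them as assumptions and defer to the linked page):

from wham_api import WHAM_API

# Load the pretrained model (requires the checkpoints fetched by fetch_demo_data.sh).
wham_model = WHAM_API()

# Run the full pipeline on a video; also returns intermediate 2D tracking and SLAM outputs.
results, tracking_results, slam_results = wham_model('examples/IMG_9732.mov')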

Dataset

Please see Dataset for details.

Evaluation

# Evaluate on 3DPW dataset
python -m lib.eval.evaluate_3dpw --cfg configs/yamls/demo.yaml TRAIN.CHECKPOINT checkpoints/wham_vit_w_3dpw.pth.tar

# Evaluate on RICH dataset
python -m lib.eval.evaluate_rich --cfg configs/yamls/demo.yaml TRAIN.CHECKPOINT checkpoints/wham_vit_w_3dpw.pth.tar

# Evaluate on EMDB dataset (also computes W-MPJPE and WA-MPJPE)
python -m lib.eval.evaluate_emdb --cfg configs/yamls/demo.yaml --eval-split 1 TRAIN.CHECKPOINT checkpoints/wham_vit_w_3dpw.pth.tar   # EMDB 1

python -m lib.eval.evaluate_emdb --cfg configs/yamls/demo.yaml --eval-split 2 TRAIN.CHECKPOINT checkpoints/wham_vit_w_3dpw.pth.tar   # EMDB 2
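For reference, W-MPJPE and WA-MPJPE measure joint error in the world frame: W-MPJPE aligns the prediction to ground truth using only the start of each segment, while WA-MPJPE rigidly aligns the entire trajectory before measuring error. A minimal numpy sketch of the whole-trajectory (WA) variant under those definitions, not the repository's implementation:

import numpy as np

def wa_mpjpe(pred, gt):
    # pred, gt: (T, J, 3) world-frame joint trajectories, same units (e.g. mm).
    P, G = pred.reshape(-1, 3), gt.reshape(-1, 3)
    mu_p, mu_g = P.mean(0), G.mean(0)
    # Best-fit rotation between the centered trajectories (Kabsch algorithm).
    U, _, Vt = np.linalg.svd((P - mu_p).T @ (G - mu_g))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    aligned = (P - mu_p) @ R.T + mu_g
    return np.linalg.norm(aligned - G, axis=-1).mean()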

Training

The training code will be released in a future update.

Acknowledgement

We would like to sincerely thank Hongwei Yi and Silvia Zuffi for the discussion and proofreading. Part of this work was done while Soyong Shin was an intern at the Max Planck Institute for Intelligent Systems.

The base implementation is largely borrowed from VIBE and TCMR. We use ViTPose for 2D keypoint detection, and DPVO and DROID-SLAM for extracting camera motion. Please visit their official websites for more details.

TODO

  • Training implementation

  • Colab / Hugging Face release

  • Demo for custom videos

Citation

@article{shin2023wham,
    title={WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion},
    author={Shin, Soyong and Kim, Juyong and Halilaj, Eni and Black, Michael J.},
    journal={arXiv preprint arXiv:2312.07531},
    year={2023}
}

License

Please see License for details.

Contact

Please contact soyongs@andrew.cmu.edu for any questions related to this work.