This repository is the official PyTorch implementation of WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion. For more information, please visit our project page.
Please see Installation for details.
To download the SMPL body models (Neutral, Female, and Male), you need to register at the SMPL and SMPLify websites. The username and password for both sites will be requested while fetching the demo data.
Next, run the following script to fetch the demo data. It will download all required dependencies, including the trained models and demo videos.
```bash
bash fetch_demo_data.sh
```
You can try the demo on an example video:
```bash
python demo.py --video examples/IMG_9732.mov --visualize
```
By default, we assume the camera focal length following CLIFF. If the camera intrinsics are known, you can instead pass them to SLAM as [fx fy cx cy], as in the example below:
```bash
python demo.py --video examples/drone_video.mp4 --calib examples/drone_calib.txt --visualize
```
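If you need to create a calibration file for your own camera, here is a minimal sketch. It assumes the file is a single line of four whitespace-separated values (fx fy cx cy, in pixels); the numbers and file names below are placeholders, not real intrinsics:

```bash
# Assumed format: a single line "fx fy cx cy" in pixel units.
# Placeholder values; substitute your own camera's intrinsics.
echo "1000.0 1000.0 960.0 540.0" > examples/my_calib.txt
python demo.py --video examples/my_video.mp4 --calib examples/my_calib.txt --visualize
```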
If you only need motion in the camera coordinate frame, you can skip SLAM:
```bash
python demo.py --video examples/IMG_9732.mov --visualize --estimate_local_only
```
You can further refine the results of WHAM using Temporal SMPLify as a post-processing step, which improves both 2D alignment and 3D accuracy. All you need to do is add the --run_smplify flag when running the demo, as shown below.
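For example, applied to the first demo video:

```bash
python demo.py --video examples/IMG_9732.mov --visualize --run_smplify
```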
Please refer to Docker for details.
Please refer to API for details.
Please see Dataset for details.
```bash
# Evaluate on 3DPW dataset
python -m lib.eval.evaluate_3dpw --cfg configs/yamls/demo.yaml TRAIN.CHECKPOINT checkpoints/wham_vit_w_3dpw.pth.tar

# Evaluate on RICH dataset
python -m lib.eval.evaluate_rich --cfg configs/yamls/demo.yaml TRAIN.CHECKPOINT checkpoints/wham_vit_w_3dpw.pth.tar

# Evaluate on EMDB dataset (also computes W-MPJPE and WA-MPJPE)
python -m lib.eval.evaluate_emdb --cfg configs/yamls/demo.yaml --eval-split 1 TRAIN.CHECKPOINT checkpoints/wham_vit_w_3dpw.pth.tar  # EMDB 1
python -m lib.eval.evaluate_emdb --cfg configs/yamls/demo.yaml --eval-split 2 TRAIN.CHECKPOINT checkpoints/wham_vit_w_3dpw.pth.tar  # EMDB 2
```
Will be updated.
We would like to sincerely thank Hongwei Yi and Silvia Zuffi for the discussion and proofreading. Part of this work was done while Soyong Shin was an intern at the Max Planck Institute for Intelligent Systems.
The base implementation is largely borrowed from VIBE and TCMR. We use ViTPose for 2D keypoint detection, and DPVO and DROID-SLAM for estimating camera motion. Please visit their official websites for more details.
- Training implementation
- Colab / Hugging Face release
- Demo for custom videos
```bibtex
@article{shin2023wham,
  title={WHAM: Reconstructing World-grounded Humans with Accurate 3D Motion},
  author={Shin, Soyong and Kim, Juyong and Halilaj, Eni and Black, Michael J.},
  journal={arXiv preprint arXiv:2312.07531},
  year={2023}
}
```
Please see License for details.
Please contact soyongs@andrew.cmu.edu for any questions related to this work.