
End-to-end Recovery of Human Shape and Pose

I modified the code from https://github.com/akanazawa/hmr and added 2D-to-3D color mapping for my human matching paper (cited below); you are welcome to check it out.

By default, I removed the mesh (triangle face) information for fast data loading. If you want to preserve the mesh and visualize the 3D data, you can run:

python demo_bg.py --img_path ../Market/pytorch/gallery/1026/1026_c1s6_038571_06.jpg # change to your own image path

The output 3D data is saved as test.obj. You can use Open3D to visualize it; on a MacBook, you can also preview test.obj directly in the folder.
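If Open3D is not available, the output can also be inspected with a few lines of plain Python. The sketch below assumes only the standard Wavefront OBJ layout (`v x y z` vertex lines, `f i j k` face lines); `load_obj` is an illustrative helper, not part of this repository:

```python
def load_obj(path):
    """Parse vertex and triangle-face lines from a Wavefront .obj file."""
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":  # vertex line: v x y z
                vertices.append(tuple(float(p) for p in parts[1:4]))
            elif parts[0] == "f":  # face line: f i j k (1-based indices)
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
    return vertices, faces
```

The parsed vertex and face lists can then be handed to any mesh viewer, or used to sanity-check that the mesh information was actually preserved (demo_bg.py keeps faces; the default pipeline drops them).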

The original paper is

Angjoo Kanazawa, Michael J. Black, David W. Jacobs, Jitendra Malik. "End-to-end Recovery of Human Shape and Pose." CVPR 2018.

and my paper is

Zhedong Zheng, Nenggan Zheng and Yi Yang. "Parameter-Efficient Person Re-identification in the 3D Space." arXiv 2021.


Requirements

  • Python 2.7
  • TensorFlow (tested on version 1.3; the demo alone runs with TF 1.12)

Installation

Setup virtualenv

conda create --name hmr python=2.7
conda activate hmr
pip install numpy
pip install -r requirements.txt

Install TensorFlow

With GPU:

conda install tensorflow-gpu==1.11.0
pip install open3d 

Without GPU:

conda install tensorflow==1.11.0
pip install open3d 

Generate Market / Duke / MSMT

Please check the datapath before generation.

python generate_3DMarket_bg.py
python generate_3DDuke_bg.py
python generate_3DMSMT_bg.py

To generate the baseline data without background:

python generate_3DMarket.py
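Since the generation scripts read the dataset from a path set inside each script, a quick stdlib check can catch a wrong path before a long run. This is a sketch; `data_dir` is a placeholder for whatever path your copy of the script uses:

```python
import os

def check_datapath(data_dir):
    """Fail fast if the dataset directory is missing or contains no images."""
    if not os.path.isdir(data_dir):
        raise IOError("dataset path not found: %s" % data_dir)
    n_images = sum(
        1
        for _, _, files in os.walk(data_dir)
        for name in files
        if name.lower().endswith((".jpg", ".png"))
    )
    if n_images == 0:
        raise IOError("no .jpg/.png images under: %s" % data_dir)
    return n_images
```

Calling it once at the top of a generation script turns a silent empty run into an immediate, descriptive error.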

Demo

  1. Download the pre-trained models:
wget https://people.eecs.berkeley.edu/~kanazawa/cachedir/hmr/models.tar.gz && tar -xf models.tar.gz
  2. Run the demo:
python -m demo --img_path data/coco1.png
python -m demo --img_path data/im1954.jpg

Images should be tightly cropped so that the height of the person is roughly 150px. For images that are not tightly cropped, you can run OpenPose and supply its output JSON (run OpenPose with the --write_json option). When json_path is specified, the demo computes the right scale and bounding-box center before running HMR:

python -m demo --img_path data/random.jpg --json_path data/random_keypoints.json

(The demo runs only on the most confident bounding box; see src/util/openpose.py:get_bbox.)
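The scale-and-center computation can be sketched as follows. This is an illustration, not the repo's actual src/util/openpose.py code; it assumes OpenPose's `--write_json` layout, where each detected person carries a flat `pose_keypoints_2d` list of (x, y, confidence) triples, and the thresholds are hypothetical:

```python
import json

def bbox_from_keypoints(json_path, vis_thresh=0.1, target_height=150.0):
    """Pick the most confident person and return (scale, center) for HMR."""
    with open(json_path) as f:
        people = json.load(f)["people"]
    best, best_conf = None, -1.0
    for person in people:
        kp = person["pose_keypoints_2d"]
        pts = [(kp[i], kp[i + 1], kp[i + 2]) for i in range(0, len(kp), 3)]
        visible = [(x, y) for x, y, c in pts if c > vis_thresh]
        conf = sum(c for _, _, c in pts)  # total confidence for this person
        if visible and conf > best_conf:
            best, best_conf = visible, conf
    xs, ys = zip(*best)
    min_x, max_x, min_y, max_y = min(xs), max(xs), min(ys), max(ys)
    center = ((min_x + max_x) / 2.0, (min_y + max_y) / 2.0)
    # Rescale so the visible keypoints span roughly target_height pixels.
    scale = target_height / (max_y - min_y)
    return scale, center
```

The returned scale matches the "person roughly 150px tall" convention described above, and the center is the midpoint of the visible-keypoint bounding box.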

Training code/data

Please see the doc/train.md!

Citation

If you use this code for your research, please consider citing:

@inProceedings{kanazawaHMR18,
  title={End-to-end Recovery of Human Shape and Pose},
  author = {Angjoo Kanazawa
  and Michael J. Black
  and David W. Jacobs
  and Jitendra Malik},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}

Open-source contributions

Dawars has created a docker image for this project: https://hub.docker.com/r/dawars/hmr/

MandyMo has implemented a PyTorch version of the repo: https://github.com/MandyMo/pytorch_HMR.git

Dene33 has made a .ipynb for Google Colab that takes video as input and returns .bvh animation! https://github.com/Dene33/video_to_bvh


I have not tested them, but the contributions are super cool! Thank you!!