Yuliang Xiu · Jinlong Yang · Dimitrios Tzionas · Michael J. Black
- [2022/09/12] KeypointNeRF is applied on ICON; quantitative numbers are in the evaluation
- [2022/07/30] are both available
- [2022/07/26] New cloth-refinement module is released, try -loop_cloth
- [2022/06/13] ETH Zürich students from the 3DV course created an add-on for garment extraction
- [2022/05/16] BEV is supported as optional HPS by Yu Sun, see commit #060e265
- [2022/05/15] Training code is released, please check Training Instruction
- [2022/04/26] HybrIK (SMPL) is supported as optional HPS by Jiefeng Li, see commit #3663704
- [2022/03/05] PIXIE (SMPL-X), PARE (SMPL), PyMAF (SMPL) are all supported as optional HPS
If you want to train and evaluate PIFu / PaMIR / ICON on your own data, please check dataset.md to prepare the dataset, training.md for training, and evaluation.md for benchmark evaluation.
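The documented commands live in those files; purely as a hedged sketch of the workflow (the apps.train entry point, the ./configs/train/ config path, and the -test flag are assumptions here), it looks roughly like:

cd ICON
# 1. synthesize train/val/test data from THuman2.0 first (see dataset.md)
# 2. train ICON (entry point and config path are assumptions; see training.md for the real command)
CUDA_VISIBLE_DEVICES=0 python -m apps.train -cfg ./configs/train/icon-filter.yaml
# 3. benchmark the trained model on the CAPE testset (the -test flag is an assumption; see evaluation.md)
CUDA_VISIBLE_DEVICES=0 python -m apps.train -cfg ./configs/train/icon-filter.yaml -test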
Given a raw RGB image, you could get the following outputs (an illustrative listing of the result files appears after this list):
- image (png):
  - segmented human RGB
  - normal maps of body and cloth
  - pixel-aligned normal-RGB overlap
- mesh (obj):
  - SMPL-(X) body from PyMAF, PIXIE, PARE, HybrIK, BEV
  - 3D clothed human reconstruction
  - 3D garments (requires 2D mask)
- video (mp4):
  - self-rotated clothed human
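Purely as an illustration of those outputs, the per-image files written to -out_dir (./results in the demo command below) might look like the listing here; every file name is an assumption, the actual names are whatever apps.infer writes:

ls ./results
# <name>_crop.png      segmented human RGB (assumed name)
# <name>_normal.png    normal maps of body and cloth (assumed name)
# <name>_overlap.png   pixel-aligned normal-RGB overlap (assumed name)
# <name>_smpl.obj      SMPL-(X) body (assumed name)
# <name>_recon.obj     3D clothed human reconstruction (assumed name)
# <name>.mp4           self-rotated render, only with -export_video (assumed name)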
Figure captions:
- ICON's intermediate results
- ICON's SMPL pose refinement
- Image, overlapped normal prediction, ICON, refined ICON
- 3D garment extracted from ICON using a 2D mask
- See docs/installation.md to install all the required packages and setup the models
- See docs/dataset.md to synthesize the train/val/test dataset from THuman2.0
- See docs/training.md to train your own model using THuman2.0
- See docs/evaluation.md to benchmark trained models on CAPE testset
- Add-on: Garment Extraction from Fashion Images, supported by ETH Zürich students as a 3DV course project.
cd ICON
# model_type:
# "pifu" reimplemented PIFu
# "pamir" reimplemented PaMIR
# "icon-filter" ICON w/ global encoder (continous local wrinkles)
# "icon-nofilter" ICON w/o global encoder (correct global pose)
python -m apps.infer -cfg ./configs/icon-filter.yaml -gpu 0 -in_dir ./examples -out_dir ./results -export_video -loop_smpl 100 -loop_cloth 200 -hps_type pymaf
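For the garment-extraction add-on listed above, a 2D cloth mask is needed in addition to the RGB image. A hedged sketch of that call, assuming the masks are passed through a -seg_dir flag (verify the actual flag in the add-on's documentation):

# same demo command, plus a directory of 2D cloth masks; the -seg_dir flag is an assumption
python -m apps.infer -cfg ./configs/icon-filter.yaml -gpu 0 -in_dir ./examples -out_dir ./results -seg_dir ./examples/segmentation -export_video -loop_smpl 100 -loop_cloth 200 -hps_type pymaf

-hps_type selects the SMPL(-X) estimator; besides pymaf, the backends listed in the news above (pixie, pare, hybrik, bev) should be accepted as values, but check apps/infer.py for the exact strings.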
Figure captions:
- Comparison with other state-of-the-art methods
- Predicted normals on in-the-wild images with extreme poses
@inproceedings{xiu2022icon,
title = {{ICON}: {I}mplicit {C}lothed humans {O}btained from {N}ormals},
author = {Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J.},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {13296-13306}
}
We thank Yao Feng, Soubhik Sanyal, Qianli Ma, Xu Chen, Hongwei Yi, Chun-Hao Paul Huang, and Weiyang Liu for their feedback and discussions, Tsvetelina Alexiadis for her help with the AMT perceptual study, Taylor McConnell for her voice over, Benjamin Pellkofer for the webpage, and Yuanlu Xu for his help in comparing with ARCH and ARCH++.
Special thanks to Vassilis Choutas for sharing the code of bvh-distance-queries.
Here are some great resources we benefit from:
- MonoPortDataset for Data Processing
- PaMIR, PIFu, PIFuHD, and MonoPort for Benchmark
- SCANimate and AIST++ for Animation
- rembg for Human Segmentation
- PyTorch-NICP for normal-based non-rigid refinement
- smplx, PARE, PyMAF, PIXIE, BEV, and HybrIK for Human Pose & Shape Estimation
- CAPE and THuman for Dataset
- PyTorch3D for Differentiable Rendering
Some images used in the qualitative examples come from pinterest.com.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 (CLIPE Project).
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.
MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB was a part-time employee of Amazon during this project, his research was performed solely at, and funded solely by, the Max Planck Society.
For more questions, please contact icon@tue.mpg.de
For commercial licensing, please contact ps-licensing@tue.mpg.de