
ICON: Implicit Clothed humans Obtained from Normals (CVPR 2022)



Yuliang Xiu · Jinlong Yang · Dimitrios Tzionas · Michael J. Black

CVPR 2022









Table of Contents
  1. Who needs ICON
  2. Installation
  3. Dataset
  4. Training
  5. Evaluation
  6. Add-on
  7. Demo
  8. Citation
  9. Acknowledgments
  10. License
  11. Disclosure
  12. Contact


Who needs ICON?

  • Given a raw RGB image, you can get:
    • image (png):
      • segmented human RGB
      • normal maps of body and cloth
      • pixel-aligned normal-RGB overlap
    • mesh (obj):
      • SMPL(-X) body estimated by PyMAF, PIXIE, PARE, HybrIK, or BEV
      • 3D clothed human reconstruction
    • video (mp4):
      • self-rotated clothed human
Intermediate Results: ICON's intermediate results
Iterative Refinement: ICON's SMPL pose refinement
Final Results: ICON's normal prediction + reconstructed mesh (w/o & w/ smoothing)
  • If you want to create a realistic and animatable 3D clothed avatar directly from a video / sequential images:
    • fully-textured with per-vertex color
    • can be animated by SMPL pose parameters
    • natural pose-dependent clothing deformation
ICON+SCANimate+AIST++: 3D clothed avatar, created from 400+ images using ICON+SCANimate, animated by AIST++
  • If you want to train and evaluate PIFu/PaMIR/ICON on your own data, please check dataset.md for dataset preparation, training.md for training, and evaluation.md for benchmark evaluation.
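The reconstructed meshes above are plain Wavefront OBJ files, so they can be inspected with a few lines of standard Python. A minimal, stdlib-only sketch; the inline tetrahedron is a stand-in for an actual ICON output file, whose exact path depends on your run:

```python
# Minimal OBJ inspection: count vertices ("v" lines) and faces ("f" lines).
# The tetrahedron below stands in for a real ICON output mesh (hypothetical path
# under the -out_dir you pass to apps.infer).
SAMPLE_OBJ = """\
v 0 0 0
v 1 0 0
v 0 1 0
v 0 0 1
f 1 2 3
f 1 2 4
f 1 3 4
f 2 3 4
"""

def obj_stats(obj_text: str) -> dict:
    """Return vertex and face counts for a Wavefront OBJ string."""
    counts = {"vertices": 0, "faces": 0}
    for line in obj_text.splitlines():
        if line.startswith("v "):
            counts["vertices"] += 1
        elif line.startswith("f "):
            counts["faces"] += 1
    return counts

print(obj_stats(SAMPLE_OBJ))  # {'vertices': 4, 'faces': 4}
```

For real outputs, replace `SAMPLE_OBJ` with `open(path).read()`, or use a mesh library such as trimesh for full geometry processing.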



Installation

Please follow the Installation Instruction to set up all the required packages, extra data, and models.

Dataset

Please follow the Dataset Instruction to generate the train/val/test dataset from THuman2.0.

Training

Please follow the Training Instruction to train your own model using THuman2.0.

Evaluation

Please follow the Evaluation Instruction to benchmark trained models on THuman2.0.

Add-on

  1. Garment Extraction from Fashion Images, supported by ETH Zürich students as a 3DV course project.

Demo

cd ICON

# PIFu* (*: re-implementation)
python -m apps.infer -cfg ./configs/pifu.yaml -gpu 0 -in_dir ./examples -out_dir ./results

# PaMIR* (*: re-implementation)
python -m apps.infer -cfg ./configs/pamir.yaml -gpu 0 -in_dir ./examples -out_dir ./results

# ICON w/ global filter (better visual details --> lower Normal Error)
python -m apps.infer -cfg ./configs/icon-filter.yaml -gpu 0 -in_dir ./examples -out_dir ./results -hps_type {pixie/pymaf/pare/hybrik/bev}

# ICON w/o global filter (higher evaluation scores --> lower P2S/Chamfer Error)
python -m apps.infer -cfg ./configs/icon-nofilter.yaml -gpu 0 -in_dir ./examples -out_dir ./results -hps_type {pixie/pymaf/pare/hybrik/bev}
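The `-hps_type` flag selects which body-pose estimator initializes the SMPL fit. To try every backend on one input folder, the commands above can be generated programmatically; a minimal sketch that only prints the commands (pass each line to your shell or `subprocess` to actually run them):

```python
import shlex

# The five HPS backends accepted by -hps_type, as listed in the demo commands.
HPS_BACKENDS = ["pixie", "pymaf", "pare", "hybrik", "bev"]

def infer_cmd(cfg: str, hps: str, in_dir: str = "./examples",
              out_dir: str = "./results", gpu: int = 0) -> str:
    """Build one apps.infer invocation as a shell-safe string."""
    args = ["python", "-m", "apps.infer",
            "-cfg", cfg, "-gpu", str(gpu),
            "-in_dir", in_dir, "-out_dir", out_dir,
            "-hps_type", hps]
    return " ".join(shlex.quote(a) for a in args)

for hps in HPS_BACKENDS:
    # Print instead of executing; swap print for subprocess.run(cmd, shell=True)
    # to launch the runs sequentially.
    print(infer_cmd("./configs/icon-filter.yaml", hps))
```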

More Qualitative Results

Comparison: comparison with other state-of-the-art methods
Extreme poses: predicted normals on in-the-wild images with extreme poses


Citation

@inproceedings{xiu2022icon,
  title     = {{ICON}: {I}mplicit {C}lothed humans {O}btained from {N}ormals},
  author    = {Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {13296--13306}
}

Acknowledgments

We thank Yao Feng, Soubhik Sanyal, Qianli Ma, Xu Chen, Hongwei Yi, Chun-Hao Paul Huang, and Weiyang Liu for their feedback and discussions, Tsvetelina Alexiadis for her help with the AMT perceptual study, Taylor McConnell for her voice over, Benjamin Pellkofer for webpage, and Yuanlu Xu's help in comparing with ARCH and ARCH++.

Special thanks to Vassilis Choutas for sharing the code of bvh-distance-queries.

We also benefited from many great open-source resources.

Some images used in the qualitative examples come from pinterest.com.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 (CLIPE Project).




License

This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.

Disclosure

MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB was a part-time employee of Amazon during this project, his research was performed solely at, and funded solely by, the Max Planck Society.

Contact

For more questions, please contact icon@tue.mpg.de

For commercial licensing, please contact ps-licensing@tue.mpg.de