TensorFlow (1.13) implementation of the CAPE model, a Mesh-CVAE with a mesh patch discriminator for dressing SMPL bodies with pose-dependent clothing, introduced in the CVPR 2020 paper:
Learning to Dress 3D People in Generative Clothing
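At generation time, CAPE samples a latent clothing code and decodes it, conditioned on body pose and clothing type, into per-vertex displacements that are added on top of the minimally clothed SMPL body. The sketch below is only a conceptual illustration of that pipeline, not this repository's API; `decode_displacements` and `smpl_forward` are hypothetical placeholders.

```python
# Conceptual sketch of CAPE's generative dressing pipeline -- NOT this repository's API.
# `decode_displacements` and `smpl_forward` are hypothetical placeholders.
import numpy as np

def sample_clothed_body(decode_displacements, smpl_forward, betas, pose, clothing_type, latent_dim):
    """Sample one clothed body: z ~ N(0, I) -> pose-dependent clothing displacements -> SMPL body."""
    z = np.random.randn(latent_dim)                      # latent clothing code
    disp = decode_displacements(z, pose, clothing_type)  # (6890, 3) per-vertex clothing offsets
    body = smpl_forward(betas, pose)                     # (6890, 3) minimally clothed SMPL vertices
    # In the actual model the offsets are applied in the canonical pose and then
    # posed through SMPL's skinning; here they are added directly for brevity.
    return body + disp
```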
We recommend creating a new virtual environment for a clean installation of the dependencies. The code has been tested on Ubuntu 18.04, Python 3.6 and CUDA 10.0.
python3 -m venv $HOME/.virtualenvs/cape
source $HOME/.virtualenvs/cape/bin/activate
pip install -U pip setuptools
- Install the PSBody Mesh package. We currently recommend version 0.3.
- Install the smplx Python package. Follow the installation instructions there, then download and set up the SMPL body model (a quick sanity check is sketched after this list).
- Then simply run
pip install -r requirements.txt
(do this last to ensure numpy==1.16.1).
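Once smplx and the SMPL model files are in place, a quick sanity check along the following lines may help before running the demo; the model folder path is an assumption and should match whatever you later pass via `--smpl_model_folder`:

```python
# Minimal sanity check that smplx can load the SMPL body model
# (the path below is an assumption; use your own SMPL model folder).
import smplx

model_folder = '/path/to/smpl_model_folder'  # same folder you pass via --smpl_model_folder
smpl = smplx.create(model_folder, model_type='smpl', gender='neutral')
output = smpl()  # default parameters: zero shape, T-pose
print(output.vertices.shape)  # expected: torch.Size([1, 6890, 3])
```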
Download the checkpoint folder and place it under the checkpoints folder. Then run:
python main.py --config configs/config.yaml --mode demo --vis_demo 1 --smpl_model_folder <path to SMPL model folder>
This will generate a few clothed body meshes in the results/ folder and show an on-screen visualization.
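The generated meshes can also be re-opened later with the PSBody Mesh package installed above; the filename below is a placeholder, since the exact output names under results/ depend on the run:

```python
# Re-open one of the demo's generated meshes (placeholder filename;
# check results/ for the actual output names).
from psbody.mesh import Mesh

mesh = Mesh(filename='results/<generated_mesh>.obj')
print(mesh.v.shape, mesh.f.shape)  # vertex and face arrays
mesh.show()                        # opens the interactive viewer
```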
We are currently working on the new train / test splits corresponding to the public CAPE dataset*, along with the data processing / loading scripts, and will update this repository soon.
* The public release of the CAPE dataset slightly differs from what we used in the paper due to the removal of subjects that did not grant public release consent.
Check out our project website for the new CAPE dataset, featuring approximately 150K dynamic clothed human mesh registrations from real scan data with consistent topology. It serves as an alternative to the popular Dynamic FAUST dataset for 3D shape training and evaluation, while offering more diverse shape deformations.
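Since all registrations in the dataset share the SMPL mesh topology, a single set of faces (for example from data/template_mesh.obj in this repository) can be reused for every frame. The sketch below assumes the dataset frames are stored as .npz files with a 'v_posed' vertex array; both the path and the field name are assumptions to verify against the dataset documentation:

```python
# Rebuild one dataset frame as a mesh, reusing the shared SMPL topology.
# The .npz path and the 'v_posed' field name are assumptions.
import numpy as np
from psbody.mesh import Mesh

faces = Mesh(filename='data/template_mesh.obj').f        # SMPL faces, shared by all frames
frame = np.load('/path/to/cape_release/some_frame.npz')  # placeholder path
clothed = Mesh(v=frame['v_posed'], f=faces)              # registered clothed mesh
clothed.show()
```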
Software Copyright License for non-commercial scientific research purposes. Please carefully read the terms and conditions and any accompanying documentation before you download and/or use the CAPE data and software (the "Dataset & Software"), including 3D meshes, pose parameters, scripts, and animations. By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.
The SMPL-related files data/{template_mesh.obj, edges_smpl.npy} are subject to the license of the SMPL body model. The PSBody Mesh package and the smplx Python package are subject to their own licenses.
If you find our code / paper / data useful to your research, please consider citing:
@inproceedings{CAPE:CVPR:20,
title = {Learning to Dress 3D People in Generative Clothing},
author = {Ma, Qianli and Yang, Jinlong and Ranjan, Anurag and Pujades, Sergi and Pons-Moll, Gerard and Tang, Siyu and Black, Michael J.},
booktitle = {Computer Vision and Pattern Recognition (CVPR)},
month = jun,
year = {2020},
month_numeric = {6}
}
The model and code are based on CoMA (ECCV'18), a convolutional mesh autoencoder. If you find the code in this repository useful, please consider also citing:
@inproceedings{COMA:ECCV18,
title = {Generating {3D} faces using Convolutional Mesh Autoencoders},
author = {Ranjan, Anurag and Bolkart, Timo and Sanyal, Soubhik and Black, Michael J.},
booktitle = {European Conference on Computer Vision (ECCV)},
pages = {725--741},
publisher = {Springer International Publishing},
year = {2018},
}